[AINews] DeepSeek v3: 671B finegrained MoE trained for $5.5m USD of compute on 15T tokens
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
Full co-design of algorithms, frameworks, and hardware is all you need.
AI News for 12/25/2024-12/26/2024. We checked 7 subreddits, 433 Twitters and 32 Discords (215 channels, and 5486 messages) for you. Estimated reading time saved (at 200wpm): 548 minutes. You can now tag @smol_ai for AINews discussions!
As teased over the Christmas break, DeepSeek v3 is here (our previous coverage of DeepSeek v2 here). The benchmarks are as good as you've come to expect from China's frontier open model lab:
(more details on aider and bigcodebench)
But the training details are even better:
- trained on 8-11x less compute than the normal budget for models of this class: specifically 2048 H800s (aka "nerfed H100s") over 2 months. Llama 3 405B was, per their paper, trained on 16k H100s. They estimate this cost $5.5m USD.
- homegrown native FP8 mixed precision training (without having access to Blackwell GPUs - as Shazeer intended?)
- Scaling up Multi-Head Latent Attention from DeepSeek v2
- distilling from R1-generated synthetic reasoning data and using other kinds of reward models
- no need for tensor parallelism - recently named by Ilya as a mistake
- pruning + healing for DeepSeekMoE style MoEs, scaled up to 256 experts (8 active + 1 shared)
- a new "multi token prediction" objective (from Better & Faster Large Language Models via Multi-token Prediction) that allows the model to look ahead and preplan future tokens (in this case just 2 at a time)
Table of Contents
- AI Twitter Recap
- AI Reddit Recap
- AI Discord Recap
- PART 1: High level Discord summaries
- Codeium (Windsurf) Discord
- Cursor IDE Discord
- aider (Paul Gauthier) Discord
- Nous Research AI Discord
- OpenRouter (Alex Atallah) Discord
- LM Studio Discord
- Stability.ai (Stable Diffusion) Discord
- OpenAI Discord
- Stackblitz (Bolt.new) Discord
- Unsloth AI (Daniel Han) Discord
- Perplexity AI Discord
- Latent Space Discord
- Interconnects (Nathan Lambert) Discord
- GPU MODE Discord
- Notebook LM Discord
- LlamaIndex Discord
- Eleuther Discord
- tinygrad (George Hotz) Discord
- Cohere Discord
- Modular (Mojo 🔥) Discord
- DSPy Discord
- LLM Agents (Berkeley MOOC) Discord
- OpenInterpreter Discord
- Nomic.ai (GPT4All) Discord
- LAION Discord
- MLOps @Chipro Discord
- PART 2: Detailed by-Channel summaries and links
- Codeium (Windsurf) ▷ #content (1 message):
- Codeium (Windsurf) ▷ #discussion (433 messages🔥🔥🔥):
- Codeium (Windsurf) ▷ #windsurf (869 messages🔥🔥🔥):
- Cursor IDE ▷ #general (744 messages🔥🔥🔥):
- aider (Paul Gauthier) ▷ #announcements (1 message):
- aider (Paul Gauthier) ▷ #general (557 messages🔥🔥🔥):
- aider (Paul Gauthier) ▷ #questions-and-tips (49 messages🔥):
- aider (Paul Gauthier) ▷ #links (2 messages):
- Nous Research AI ▷ #general (324 messages🔥🔥):
- Nous Research AI ▷ #ask-about-llms (2 messages):
- Nous Research AI ▷ #research-papers (2 messages):
- OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
- OpenRouter (Alex Atallah) ▷ #app-showcase (2 messages):
- OpenRouter (Alex Atallah) ▷ #general (301 messages🔥🔥):
- LM Studio ▷ #general (153 messages🔥🔥):
- LM Studio ▷ #hardware-discussion (101 messages🔥🔥):
- Stability.ai (Stable Diffusion) ▷ #general-chat (226 messages🔥🔥):
- OpenAI ▷ #ai-discussions (183 messages🔥🔥):
- OpenAI ▷ #gpt-4-discussions (33 messages🔥):
- OpenAI ▷ #prompt-engineering (1 message):
- OpenAI ▷ #api-discussions (1 message):
- Stackblitz (Bolt.new) ▷ #prompting (7 messages):
- Stackblitz (Bolt.new) ▷ #discussions (198 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #general (132 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #off-topic (4 messages):
- Unsloth AI (Daniel Han) ▷ #help (37 messages🔥):
- Unsloth AI (Daniel Han) ▷ #research (5 messages):
- Perplexity AI ▷ #general (134 messages🔥🔥):
- Perplexity AI ▷ #sharing (14 messages🔥):
- Perplexity AI ▷ #pplx-api (4 messages):
- Latent Space ▷ #ai-general-chat (103 messages🔥🔥):
- Latent Space ▷ #ai-announcements (3 messages):
- Interconnects (Nathan Lambert) ▷ #news (45 messages🔥):
- Interconnects (Nathan Lambert) ▷ #ml-questions (5 messages):
- Interconnects (Nathan Lambert) ▷ #ml-drama (15 messages🔥):
- Interconnects (Nathan Lambert) ▷ #random (6 messages):
- Interconnects (Nathan Lambert) ▷ #memes (4 messages):
- Interconnects (Nathan Lambert) ▷ #nlp (3 messages):
- Interconnects (Nathan Lambert) ▷ #reads (8 messages🔥):
- GPU MODE ▷ #general (12 messages🔥):
- GPU MODE ▷ #triton (8 messages🔥):
- GPU MODE ▷ #cuda (14 messages🔥):
- GPU MODE ▷ #torch (3 messages):
- GPU MODE ▷ #algorithms (1 message):
- GPU MODE ▷ #cool-links (2 messages):
- GPU MODE ▷ #beginner (5 messages):
- GPU MODE ▷ #pmpp-book (3 messages):
- GPU MODE ▷ #jax (1 message):
- GPU MODE ▷ #torchao (1 message):
- GPU MODE ▷ #sequence-parallel (1 message):
- GPU MODE ▷ #off-topic (1 message):
- GPU MODE ▷ #irl-meetup (1 message):
- GPU MODE ▷ #hqq-mobius (1 message):
- GPU MODE ▷ #triton-viz (1 message):
- GPU MODE ▷ #llmdotc (1 message):
- GPU MODE ▷ #rocm (1 message):
- GPU MODE ▷ #intel (1 message):
- GPU MODE ▷ #lecture-qa (1 message):
- GPU MODE ▷ #bitnet (7 messages):
- GPU MODE ▷ #arm (1 message):
- GPU MODE ▷ #sparsity-pruning (2 messages):
- GPU MODE ▷ #webgpu (1 message):
- GPU MODE ▷ #liger-kernel (1 message):
- GPU MODE ▷ #metal (2 messages):
- GPU MODE ▷ #avx (1 message):
- GPU MODE ▷ #� (1 message):
- GPU MODE ▷ #edge (1 message):
- GPU MODE ▷ #arc-agi-2 (9 messages🔥):
- Notebook LM Discord ▷ #use-cases (7 messages):
- Notebook LM Discord ▷ #general (54 messages🔥):
- LlamaIndex ▷ #blog (1 message):
- LlamaIndex ▷ #general (36 messages🔥):
- LlamaIndex ▷ #ai-discussion (3 messages):
- Eleuther ▷ #general (16 messages🔥):
- Eleuther ▷ #research (17 messages🔥):
- Eleuther ▷ #interpretability-general (6 messages):
- Eleuther ▷ #multimodal-general (1 message):
- tinygrad (George Hotz) ▷ #general (8 messages🔥):
- tinygrad (George Hotz) ▷ #learn-tinygrad (31 messages🔥):
- Cohere ▷ #discussions (13 messages🔥):
- Cohere ▷ #questions (6 messages):
- Cohere ▷ #cmd-r-bot (15 messages🔥):
- Cohere ▷ #projects (1 message):
- Modular (Mojo 🔥) ▷ #general (6 messages):
- Modular (Mojo 🔥) ▷ #mojo (19 messages🔥):
- Modular (Mojo 🔥) ▷ #max (2 messages):
- DSPy ▷ #show-and-tell (1 message):
- DSPy ▷ #general (12 messages🔥):
- LLM Agents (Berkeley MOOC) ▷ #mooc-questions (8 messages🔥):
- OpenInterpreter ▷ #general (7 messages):
- Nomic.ai (GPT4All) ▷ #general (3 messages):
- LAION ▷ #general (2 messages):
- MLOps @Chipro ▷ #general-ml (1 message):
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
AI Model Developments and Releases
- DeepSeek-V3 Launch and Performance: @deepseek_ai and @reach_vb announced the release of DeepSeek-V3, featuring 671B MoE parameters and trained on 14.8T tokens. This model outperforms GPT-4o and Claude Sonnet-3.5 in various benchmarks.
- Compute Efficiency and Cost-Effectiveness: @scaling01 highlighted that DeepSeek-V3 was trained using only 2.788M H800 GPU hours, significantly reducing costs compared to models like Llama 3, which used 30.8M GPU hours (a quick arithmetic check follows this list).
- Deployment and Accessibility: @DeepLearningAI and @reach_vb shared updates on deploying DeepSeek-V3 through platforms like Hugging Face, emphasizing its open-source availability and API compatibility.
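The headline $5.5m figure follows directly from the GPU-hour count under the roughly $2 per H800-hour rental price assumed in the technical report; a quick sanity check:

```python
gpu_hours = 2.788e6   # total H800 GPU hours reported for DeepSeek-V3
rate_usd = 2.0        # assumed rental price per H800-hour, per the technical report
print(f"${gpu_hours * rate_usd / 1e6:.2f}M")  # -> $5.58M, matching the ~$5.5m headline
print(f"{30.8e6 / gpu_hours:.0f}x")           # -> ~11x fewer GPU hours than Llama 3 405B
```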
AI Research Techniques and Benchmarks
- OREO and NLRL Innovations: @TheTuringPost discussed the OREO method and Natural Language Reinforcement Learning (NLRL), showcasing their effectiveness in multi-step reasoning and agent control tasks.
- Chain-of-Thought Reasoning Without Prompting: @denny_zhou introduced a breakthrough in Chain-of-Thought (CoT) reasoning by fine-tuning models to reason intrinsically without relying on task-specific prompts, significantly enhancing model reasoning capabilities.
- Benchmark Performance: @francoisfleuret and @TheTuringPost reported that new techniques like Multi-Token Prediction (MTP) and Chain-of-Knowledge consistently outperform existing benchmarks in areas such as math problem-solving and agent control.
Open Source AI vs Proprietary AI
- Competitive Edge of Open-Source Models: @scaling01 emphasized that DeepSeek-V3 now matches or exceeds proprietary models like GPT-4o and Claude Sonnet-3.5, advocating for the sustainability and innovation driven by open-source AI.
- Licensing and Accessibility: @deepseek_ai highlighted that DeepSeek-V3 is open-source and licensed for commercial use, making it a liberal alternative to closed models and promoting wider accessibility for developers and enterprises.
- Economic Implications: @reach_vb and @DeepLearningAI discussed how open-source AI democratizes access, reduces dependency on high-margin proprietary models, and fosters a more inclusive AI ecosystem.
AI Infrastructure and Compute Resources
- Optimizing GPU Usage: @francoisfleuret and @scaling01 explored how DeepSeek-V3 leverages H800 GPUs efficiently through techniques like Multi-Token Prediction (MTP) and Load Balancing, enhancing compute utilization and training efficiency.
- Hardware Design Improvements: @francoisfleuret suggested hardware enhancements such as improved FP8 GEMM and better quantization support for MoE training, addressing communication bottlenecks and compute inefficiencies.
- Cost-Effective Scaling Strategies: @reach_vb detailed how DeepSeek-V3 achieved state-of-the-art performance with a fraction of the typical compute resources, emphasizing algorithm-framework-hardware co-design to maintain cost-effectiveness while scaling.
Immigration and AI Talent Policies
- Advocacy for Skilled Immigration: @AndrewYNg and @HamelHusain stressed the importance of high-skill immigration programs like H-1B and O-1 visas for fostering innovation and economic growth within the AI sector.
- Policy Critiques and Recommendations: @bindureddy and @HamelHusain critiqued restrictive visa policies, advocating for easier visa transitions, eliminating job-specific restrictions, and expanding legal immigration to enhance US AI competitiveness and innovation.
- Economic and Moral Arguments: @AndrewYNg highlighted that immigrants create more jobs than they take, framing visa reforms as both an economic imperative and a moral issue to support the American economy.
Memes and Humor
- Fun Interactions and Memes: @HamelHusain humorously remarked on the misunderstandings in AI model performances, bringing a casual and entertaining tone to the technical discourse.
- Playful AI Conversations: @teortaxesTex posted a meme-like comment, injecting humor into the conversation about AI capabilities.
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. DeepSeek V3 Release: Technical Innovations and Benchmarks
- DeepSeek-V3 Officially Released (Score: 101, Comments: 22): DeepSeek has released DeepSeek-V3, featuring a Mixture of Experts (MoE) architecture with 671B total parameters and 37B activated parameters, outperforming other open-source models and matching proprietary models like GPT-4o and Claude-3.5-Sonnet. The model shows significant improvements in knowledge-based tasks, long text evaluations, coding, mathematics, and Chinese language capabilities, with a 3x increase in token generation speed. The open-source model supports FP8 weights, with community tools like SGLang and LMDeploy offering native FP8 inference support, and a promotional API pricing period until February 8, 2025.
- DeepSeek-V3's FP8 Training: The model is trained using an FP8 mixed precision training framework, marking a first in validating FP8 training's feasibility on a model of this scale. This approach yielded a stable training process without any irrecoverable loss spikes or rollbacks, prompting curiosity about whether DeepSeek has effectively "cracked" FP8 training (a toy sketch of block-scaled FP8 follows this list).
- Economic and Technical Considerations: Training DeepSeek-V3 cost $5.5 million, highlighting the economic efficiency valued by the quantitative firm behind it. Discussions also touched on potential GPU sanctions influencing the model's design, suggesting it may be optimized for CPU and RAM use, with mentions of running it on Epyc boards.
- Community and Open Source Dynamics: There is a distinction between open-source and free software, with comments noting that DeepSeek-V3's release on r/localllama targets the local community rather than broader open-source promotion. Some users humorously noted the model's release on Christmas, likening it to a surprise announcement from a Chinese "Santa."
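For readers wondering what FP8 mixed precision means mechanically, here is a toy block-scaled quantize/dequantize sketch in PyTorch (the 128-wide tiles and e4m3 format are illustrative choices, not DeepSeek's exact recipe; requires a recent PyTorch with float8 dtypes):

```python
# Toy block-scaled FP8 (e4m3) quantize/dequantize: store one float scale per
# 128-wide tile next to the FP8 payload. Tile size and dtype are illustrative.
import torch

FP8 = torch.float8_e4m3fn
FP8_MAX = torch.finfo(FP8).max  # 448.0 for e4m3

def quantize_tiles(x: torch.Tensor, tile: int = 128):
    xt = x.reshape(x.shape[0], -1, tile)                   # (rows, n_tiles, tile)
    scale = xt.abs().amax(dim=-1, keepdim=True) / FP8_MAX  # per-tile scale
    q = (xt / scale.clamp(min=1e-12)).to(FP8)
    return q, scale

def dequantize_tiles(q: torch.Tensor, scale: torch.Tensor, shape):
    return (q.to(torch.float32) * scale).reshape(shape)

w = torch.randn(256, 512)
q, s = quantize_tiles(w)
err = (w - dequantize_tiles(q, s, w.shape)).abs().max()
print(err)  # small per-tile quantization error
```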
- Deepseek V3 Chat version weights has been uploaded to Huggingface (Score: 143, Comments: 67): DeepSeek V3 chat version weights are now available on Huggingface, providing access to the latest iteration of this AI model.
- Hardware Requirements and Performance: Discussions highlight the significant hardware requirements for running DeepSeek V3, with mentions of needing 384GB RAM and four RTX 3090s for one-bit quantization. Users discuss various quantization levels and their VRAM requirements, with a humorous tone about needing to sell assets to afford the necessary GPUs.
- Open Source and Competition: There's a lively debate about open-source models outperforming proprietary ones, with references to Elon Musk's X.AI and the irony of open-source models potentially surpassing his proprietary Groq2 and Groq3 models. The conversation underscores the value of open-source competition in driving technological advancement.
- Model Size and Complexity: The model's size, at 685B parameters and 163 shards, is a focal point of discussion, with users joking about the impracticality of needing 163 GPUs. This highlights the challenges of handling such a large and complex model in terms of both hardware and software implementation.
- Sonnet3.5 vs v3 (Score: 83, Comments: 19): DeepSeek V3 significantly outperforms Sonnet 3.5 in benchmarks, as illustrated by an animated image showing a dramatic confrontation between characters labeled "Claude" and "deepseek." The scene conveys a dynamic and competitive environment, emphasizing the notable performance gap between the two.
- DeepSeek V3 is significantly more cost-effective, being 57 times cheaper than Sonnet 3.6, and offers nearly unlimited availability on its website, compared to Claude's limited access even for paid users.
- There is some concern about DeepSeek V3's low context window, though its price-to-performance ratio is highly praised, rated as 10/10 by users.
- Users express interest in real-world testing of DeepSeek V3 to verify benchmark results, with a suggestion to include it in lmarena's webdev arena for a more comprehensive comparison against Sonnet.
Theme 2. Cost Efficiency of DeepSeek V3 vs Competition
- PSA - Deepseek v3 outperforms Sonnet at 53x cheaper pricing (API rates) (Score: 291, Comments: 113): Deepseek V3 outperforms Sonnet while being 53x cheaper in API rates, which is a significant difference even compared to a 3x price disparity. The author expresses interest in Anthropic and suggests that they might still pay more for superior performance in coding tasks if a model offers a substantial improvement.
- The training cost for Deepseek V3 was $5.6M, utilizing 2,000 H800s over less than two months, highlighting potential efficiencies in LLM training. The model's API pricing is significantly cheaper than Claude Sonnet, at $0.14/1M tokens in and $0.28/1M out versus $3/1M in and $15/1M out for Sonnet (the headline 53x ratio is worked out below), making it ~5x cheaper than some local builds' electricity costs.
- Deepseek V3's context window is only 64k, which might contribute to its cost-effectiveness, though it still underperforms against Claude in some benchmarks. There is a discussion on the model's parameter size (37B active parameters) and the use of MoE (Mixture of Experts) to reduce inference costs.
- Concerns about data usage and training on API requests were raised, with some skepticism about the model's performance and data practices. There is anticipation for Deepseek V3's availability on platforms like OpenRouter, with mentions of promotions running until February to further reduce costs.
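Where the headline "53x" comes from: it is roughly the ratio on output tokens at the quoted rates; input is closer to 21x.

```python
deepseek_in, deepseek_out = 0.14, 0.28  # USD per 1M tokens (promo rates quoted above)
sonnet_in, sonnet_out = 3.00, 15.00     # USD per 1M tokens

print(f"{sonnet_in / deepseek_in:.1f}x")    # ~21.4x cheaper on input tokens
print(f"{sonnet_out / deepseek_out:.1f}x")  # ~53.6x cheaper on output tokens
```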
- Deepseek V3 benchmarks are a reminder that Qwen 2.5 72B is the real king and everyone else is joking! (Score: 86, Comments: 46): DeepSeek V3 benchmarks demonstrate Qwen 2.5 72B as a leading model, outperforming others such as Llama-3.1-405B, GPT-4o-0513, and Claude 3.5 across several benchmarks. Notably, DeepSeek-V3 excels in the MATH 500 benchmark with a score of 90.2%, highlighting its superior accuracy.
- Discussion highlights the cost-efficiency of running models like DeepSeek V3 on servers for multiple users instead of local setups with GPUs like 2x3090, emphasizing savings on electricity and hardware. OfficialHashPanda notes the advantages of MoE (Mixture of Experts), which allows for reduced active parameters while increasing capabilities, making it suitable for serving many users.
- Comments explore the hardware requirements and costs, with mentions of using cheap RAM and server CPUs with high memory bandwidth for running large models efficiently. The conversation contrasts the cost of APIs versus local hardware setups, suggesting server-based solutions are more economical for large-scale usage.
- The potential for smaller, efficient models is discussed, with interest in what a DeepSeek V3 Lite could offer. Calcidiol suggests that future "lite" models might match the capabilities of today's larger models by leveraging better training data and techniques, indicating the ongoing evolution and optimization of AI models.
Theme 3. FP8 Training Breakthrough in DeepSeek V3
- Deepseek V3 is officially released (code, paper, benchmark results) (Score: 372, Comments: 96): DeepSeek V3 has been officially released, featuring FP8 training capabilities. The release includes access to the code, a research paper, and benchmark results, marking a significant development in the field of AI training methodologies.
- DeepSeek V3's Performance and Capabilities: Despite its impressive architecture and FP8 training, DeepSeek V3 still trails behind models like Claude Sonnet 3.5 in some benchmarks. However, it is praised for being the strongest open-weight model currently available, with potential for easier self-hosting if the model size is reduced.
- Technical Requirements and Costs: Running DeepSeek V3 requires substantial resources, such as 384GB RAM for a 600B model, and could cost around $10K for a basic setup. Users discuss various hardware configurations, including EPYC servers and the feasibility of CPU-only inference, highlighting the need for extensive RAM and VRAM.
- Innovative Features and Licensing Concerns: The model introduces innovative features like Multi-Token Prediction (MTP) and efficient FP8 mixed precision training, significantly reducing training costs to 2.664M GPU hours. However, licensing issues are a concern, as the Deepseek license is seen as highly restrictive for commercial use.
- Wow this maybe probably best open source model ? (Score: 284, Comments: 99): DeepSeek-V3 demonstrates exceptional performance as an open-source model, surpassing its predecessors and competitors like DeepSeek-V2.5, Qwen2.5-72B-Inst, Llama-3.1-405B-Inst, GPT-4o-0513, and Claude-3.5-Sonnet-1022. Notably, it achieves a 90.2% accuracy on the MATH 500 benchmark, indicating its robust training stability and efficiency when using FP8.
- Inference Challenges and Capabilities: Users discussed the difficulty of running DeepSeek-V3 locally due to its 671B parameters, with 4-bit quantization needing at least 336GB of RAM (see the arithmetic after this list). Despite this, it can achieve around 10 tokens/second in CPU inference on a 512GB dual Epyc system thanks to its 37B active parameters and mixture-of-experts architecture with 256 experts.
- Model Comparison and Performance: The model's performance is celebrated as comparable to closed-source models like GPT-4o and Claude-3.5-Sonnet, with some users noting its potential to outperform in goal-oriented tasks despite possibly lagging in instruction adherence compared to Llama.
- Open Weights vs. Open Source: There is confusion and clarification around the model being open weights rather than fully open source, with discussions about the implications and potential for future distillation to a smaller, more manageable size like 72B parameters.
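The 336GB figure is straightforward arithmetic: in a MoE every expert must stay resident even though only 37B parameters are active per token, so 4-bit quantization still stores half a byte for each of the 671B weights.

```python
params = 671e9            # total parameters: every expert stays resident in a MoE
bytes_per_param = 0.5     # 4-bit quantization = half a byte per weight
print(f"{params * bytes_per_param / 1e9:.1f} GB")  # -> 335.5 GB, i.e. the ~336GB quoted
```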
Other AI Subreddit Recap
/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT
Theme 1. OpenAI O1 Model Impacting Financial Markets
- A REAL use-case of OpenAI o1 in trading and investing (Score: 232, Comments: 207): The OpenAI O1 model demonstrates significant advancements in financial research and trading strategies, outperforming traditional models by offering precise, data-driven solutions and supporting function-calling for generating JSON objects. Notably, the model's ability to perform accurate financial analysis, such as identifying SPY's 5% drops over 7-day periods since 2000, enables the creation of sophisticated trading strategies that can be deployed without coding. The model's enhanced capabilities, including vision API and improved reasoning, make it accessible for complex financial tasks, potentially transforming finance and Wall Street by democratizing algorithmic trading and research.
- Commenters highlighted the challenges and skepticism around using AI for financial markets, emphasizing issues like data leakage and the efficient market hypothesis. Many pointed out that historical backtesting doesn't guarantee future success due to market adaptability and randomness, suggesting that these models may not perform well in real-time scenarios.
- Behavioral aspects of investing were discussed, with some users noting that emotional responses to market fluctuations, such as selling during a drawdown, can undermine strategies. The importance of understanding risk-reward dynamics and avoiding overconfidence in AI-generated strategies was also stressed.
- A few users shared personal experiences and projects, like a Vector Stock Market bot using various LLMs, but acknowledged the limitations and need for further testing. The general consensus was that AI might democratize access to tools but won't necessarily lead to consistent outperformance due to inherent market complexities.
Theme 2. Debates Surrounding O1 Pro Mode's Usefulness
- o1 pro mode is pathetic. (Score: 177, Comments: 133): The post criticizes OpenAI's O1 Pro mode, describing it as overpriced and inefficient for programming tasks compared to 4o, due to its slow output generation. The author, self-identified as an AI amateur, argues that the models are overfit for benchmarks and not practical for real-world applications, suggesting that the "reasoning models" are primarily a marketing tactic. The only practical use noted is in alignment tasks, where the model assesses user intent.
- The o1 Pro model receives mixed reviews; some users find it invaluable for complex programming tasks, citing its ability to handle large codebases and produce accurate results on the first try, while others criticize its slow response time and dated knowledge cutoff. Users like ChronoPsyche and JohnnyTheBoneless praise its capability in handling complex tasks, whereas others, like epistemole, argue that unlimited rate limits are the real advantage, not the model's performance.
- Several users emphasize the importance of detailed prompts for maximizing o1 Pro's potential, suggesting that providing a comprehensive document or using iterative methods with large context windows can yield better results compared to feeding small code snippets. Pillars-In-The-Trees compares effective prompting to instructing a graduate student, highlighting the model's proficiency in logical tasks.
- Discussions reveal that o1 Pro excels in certain programming languages, with users like NootropicDiary mentioning its superiority in Rust over other models like Claude, while others find Claude more effective for different languages such as TypeScript. This reinforces the notion that the model's effectiveness can vary significantly depending on the task and language used.
Theme 3. OpenAI's Latest Developments and Tools Overview
- 12 Days of OpenAi - a comprehensive summary. (Score: 227, Comments: 25): The "12 Days of OpenAI" grid captures daily highlights between December 5th and December 20th, featuring updates like the ChatGPT Pro Plan on December 5th and Reinforcement Fine-Tuning on December 6th. The series culminates with o3 and o3-mini advancements on December 20th, indicating progress towards AGI.
- Reinforcement Fine-Tuning on Day 2 is highlighted as a significant development, with potential to substantially improve systems based on minimal examples. While the absence of a formal paper leaves some uncertainty, its implications for agent development are considered promising, especially looking towards 2025.
- Discussion around Canvas UX indicates that its recent update was minor, with some users expressing dissatisfaction over its limitation to approximately 200 lines. Despite being a past introduction, it remains a point of contention among users.
- There is curiosity about the availability of a Windows app following the MacOS app update, with a humorous suggestion that it may coincide with Microsoft building OpenAI a nuclear reactor.
Theme 4. ChatGPT Downtime and User Impact
- CHAT GPT IS DOWN. (Score: 366, Comments: 206): ChatGPT experienced a significant service disruption, with a peak of 5,315 outages reported at 6:00 PM. The graph highlights a sharp increase in outage reports after a period of minimal activity, indicating widespread user impact.
- Users express frustration and humor over the ChatGPT outage, with some joking about the reliance on AI for tasks like homework and productivity. Street-Inspectors humorously notes the irony of asking ChatGPT why it isn't working.
- There is a reference to OpenAI's status page and Downdetector as sources for checking outage status, with a link provided by bashbang showing the major outage affecting ChatGPT, API, and Sora.
- Kenshiken and BuckyBoy3855 mention an "upstream provider" as the cause of the issue, highlighting the technical aspect of the outage, while HappinessKitty speculates about server capacity issues.
AI Discord Recap
A summary of Summaries of Summaries by o1-2024-12-17
Theme 1. DeepSeek V3 Takes Center Stage
- Massive Mixed-Precision Boost: DeepSeek V3 unveiled a 685B-parameter model with FP8 training, claiming 2 orders of magnitude cost savings. It runs at about 60 tokens/second, trained on 14.8T tokens, and many regard it as a strong open competitor to GPT-4o.
- API Spread and Triple Usage: OpenRouter reported DeepSeek V3 usage tripled post-release, rivaling pricier incumbents. Community members praised its robust coding performance but flagged slow responses and large VRAM demands.
- MoE Structure Stirs Hype: DeepSeek’s Mixture-of-Experts architecture offers clearer scaling pathways and drastically lower training costs. Engineers speculate about future open-source expansions and 320-GPU HPC clusters for stable inference.
Theme 2. Code Editors & IDE Woes
- Windsurf & Cascade Blues: Windsurf’s Cascade Base model earned criticism for mishandling coding prompts and devouring credits without results. Engineers proposed global_rules workarounds, yet many remain frustrated by UI lags and unresponsive queries.
- Cursor IDE's Token Trials: Cursor IDE struggles with limited context handling and performance dips for large code tasks. Users contrasted it with DeepSeek and Cline, praising their extended context windows for more robust code generation.
- Bolt Token Tensions: Stackblitz (Bolt.new) users burned up to 1.5 million tokens in repetitive code requests. Many demanded direct code edits over example snippets and turned to GitHub feedback for subscription-tier improvements.
Theme 3. AI Powers Creative & Collaborative Work
- Podcasting Meets Real-Time Summaries: NotebookLM users integrated Google News into AI-driven podcasts, generating comedic bits alongside current events. Some shared short 15-minute TTRPG recaps, highlighting AI’s ability to keep hobbyists swiftly informed.
- ERP and Roleplay: Enthusiasts wrote advanced prompts for immersive tabletop campaigns, ensuring continuity in complex narratives. They cited chunking and retrieval-augmented generation (RAG) as vital for stable long-form storytelling.
- Voice to Voice & Music Generation: AI engineers showcased voice-to-voice chat apps and music creation from text prompts. They invited collaborators to refine DNN-VAD pipelines, bridging audio conversion with generative text models in fun new workflows.
Theme 4. Retrieval, Fine-Tuning, and HPC Upscaling
- GitIngest & GitDiagram: Devs mapped massive codebases to text and diagrams for RAG experiments. This approach streamlined LLM training and code ingestion, letting HPC clusters process big repos more effectively.
- LlamaIndex & DocumentContextExtractor: Users plugged in batch processing to slice costs by 50% and handle tasks off-hours. Combining chunk splitting, local embeddings, and optional open-source RLHF tools improved accuracy on real-world data.
- Fine-Tuning VLMs & HPC MLOps: Researchers tackled LLaVA, Qwen-VL, and HPC frameworks like Guild AI to manage large-scale model training. They noted HPC’s overhead and debated rolling their own minimal ops solutions to avoid SaaS pitfalls.
Theme 5. Key Tech & Performance Fixes
- TMA Beats cp.async: HPC folks explained how TMA outperforms cp.async on H100 for GEMM, allowing bulk scheduling and lower register usage. They praised structured sparse kernels in CUTLASS for further gains, especially with FP8.
- Mojo & Modular Gains: Users debugged StringRef crashes and uncovered missing length checks in memcpy calls. They praised new merchandise and debated MAX vs XLA compile times, eyeing improvements for HPC code.
- Tinygrad vs. PyTorch Speed Race: Tinygrad trailed PyTorch on CUDA with 800ms vs. 17ms forward passes, but devs pinned hopes on beam search caching and jitting. They merged PR fixes for input-creation loops and hammered out matching-engine bounties to reduce overhead.
PART 1: High level Discord summaries
Codeium (Windsurf) Discord
- Windsurf's Bold Industry Breakthrough: A new video shows engineers detailing how Windsurf deliberately defies typical development approaches, sharing insights on workflow changes and design choices (Windsurf's Twitter).
- They also sent out a holiday greeting, emphasizing community spirit and sparking conversation about the fresh perspective behind these bold moves.
- Cascade Base Model Blues: Users criticized the Cascade Base model for lacking accuracy in complex coding tasks and often failing to follow simple commands, especially when compared to Claude 3.5 Sonnet.
- Though some shared partial successes with global rules, others found minimal improvement and posted their frustrations alongside links like awesome-windsurfrules.
- Remote-Host Hiccups & Lag: People connecting via SSH Remote Hosts noticed that Windsurf displayed significant delays, making real-time edits confusing and untracked until Cascade updated.
- They reported that commands would still execute properly, but the delayed interface created a disjointed workflow that many found disruptive.
- Credit Drains & Unresponsive Queries: Users felt shortchanged when unresponsive requests devoured tokens without delivering functional outputs, leading to repeated support contact via Windsurf Editor Support.
- Many voiced concern over these credit-consuming failures, suggesting they undermined confidence in Windsurf's reliability for extensive codebases.
Cursor IDE Discord
- DeepSeek V3 Dominates Discourse: Developers praised DeepSeek V3 for code generation and analysis, claiming it rivals Sonnet 3.5 and offers faster outputs, as seen in this tweet from DeepSeek.
- Community members discussed integration with Cursor IDE, referencing the DeepSeek Platform for an API-compatible approach, and shared interest in lower-cost usage.
- Cursor IDE's Token Trials: Many reported Cursor IDE has a limited context window, reducing performance for large code generation tasks, with the Cursor site offering downloads for various platforms.
- Users compared how DeepSeek and Cline handle extended context windows more efficiently, referencing the Cursor Forum for continuing feedback about better token utilization.
- Next.js UI Woes Trip Up Designers: Creators battled UI issues in Next.js, complaining that code generation from Claude sometimes misaligned elements and complicated styling, even after using libraries like shadcn.
- They recommended embedding relevant docs into context for better design outcomes and pointed to Uiverse for quick UI components.
- OpenAI's Reliability Rollercoaster: Some confronted OpenAI's recent performance issues, citing slower response times and reduced availability, with alternative models offering steadier results at lower cost.
- They advised testing multiple AI systems, referencing the DeepSeek API Docs for compatibility, while others simply toggled between providers to keep tasks moving forward.
aider (Paul Gauthier) Discord
- Aider v0.70.0 Revs Up Self-Code Stats: Aider v0.70.0 introduced analytics opt-in, new error handling for interactive commands, and expanded model support, described in detail within Aider Release History.
- Community members highlighted that Aider wrote 74% of the code in this release, praising the tool's improved installation, watch-files functionality, and Git name handling as major upgrades.
- DeepSeek V3 Delivers 3x Speed Gains: DeepSeek V3 now processes 60 tokens/second (3x faster than V2), showing stronger coding performance than Sonnet 3.5 and featuring a 64k token context limit, as seen in this tweet.
- Community voiced excitement about DeepSeek V3 outpacing Claude in some tasks, although slow responses and context management remain consistent points of discussion.
- BigCodeBench Spots LLM Strengths & Shortfalls: The BigCodeBench Leaderboard (link) evaluates LLMs on real-world programming tasks and references the arXiv paper for deeper methodology.
- Contributors compared DeepSeek and O1 scores, noting how these metrics helped clarify each model’s code-generation capabilities under practical conditions.
- GitDiagram & GitIngest Make Repos Transparent: GitDiagram converts GitHub repositories into interactive diagrams, while GitIngest renders any Git repo as plain text for hassle-free code ingestion.
- Users need only replace 'hub' with 'diagram' or 'ingest' in the URL to instantly visualize repository structures or prepare them for any LLM, as in the sketch below.
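In other words, the entire integration is a substring swap on the repository URL; a trivial sketch:

```python
def github_to(url: str, service: str) -> str:
    # e.g. github.com/owner/repo -> gitdiagram.com/owner/repo
    return url.replace("github.com", f"git{service}.com", 1)

print(github_to("https://github.com/tinygrad/tinygrad", "diagram"))
print(github_to("https://github.com/tinygrad/tinygrad", "ingest"))
```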
Nous Research AI Discord
- DeepSeek V3’s GPU Gobbler Gains: DeepSeek V3 launched with 685 billion parameters and demands roughly 320 H100-class GPUs for optimal performance, as shown in the official code repository.
- Discussions emphasized its large-scale VRAM requirements for stable inference, with members calling it one of the largest open-weight models available.
- Differentiable Cache Speeds Reasoning: Research on Differentiable Cache Augmentation reveals a method for pairing a frozen LLM with an offline coprocessor that manipulates the key-value (kv) cache, as described in this paper (a toy sketch follows below).
- The approach cuts perplexity on reasoning tasks, with members observing it maintains LLM functionality even if the coprocessor goes offline.
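As a toy paraphrase of the idea (an illustrative reduction, not the paper's architecture): keep the base LM frozen and train a small coprocessor whose learned latents attend over the cache and emit a handful of extra entries for the decoder to attend to.

```python
import torch
from torch import nn

class CacheAugmenter(nn.Module):
    """Toy coprocessor: learned latents attend over a frozen LM's (flattened)
    kv-cache and produce extra embeddings to append as attendable positions."""
    def __init__(self, d_model: int = 256, n_latents: int = 8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_latents, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, cache: torch.Tensor) -> torch.Tensor:
        # cache: (batch, seq, d_model); output: (batch, n_latents, d_model)
        q = self.latents.unsqueeze(0).expand(cache.size(0), -1, -1)
        out, _ = self.attn(q, cache, cache)
        return out  # appended to the cache; only this module receives gradients

aug = CacheAugmenter()
extra = aug(torch.randn(2, 32, 256))
print(extra.shape)  # torch.Size([2, 8, 256])
```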
- Text-to-Video Tussle: Hunyuan vs LTX: Users compared Hunyuan and LTX text-to-video models for performance, emphasizing VRAM requirements for smoother rendering.
- They showed interest in T2V developments, suggesting resource-intensive tasks might benefit from pipeline adjustments.
- URL Moderation API Puzzle: An AI engineer struggled to build a URL moderation API that classifies unsafe sites accurately, highlighting issues with Llama’s structured output and OpenAI’s frequent denials.
- Community feedback noted the importance of specialized domain handling, as repeated attempts produced inconsistent or partial results.
- Inference Cost Conundrum: Participants debated the cost structure for deploying large AI models, questioning whether promotional pricing could endure high-usage demand.
- They suggested continuous load might balance operational expenses, keeping high-performance AI services feasible despite cost concerns.
OpenRouter (Alex Atallah) Discord
- Web Searching LLMs & Price Plunge: OpenRouter introduced Web Search for any LLM, currently free to use, offering timely references for user queries, as shown in this demonstration. They also slashed prices for multiple models, including a 12% cut on qwen-2.5 and a 31% cut on hermes-3-llama-3.1-70b.
- Community members labeled the cuts significant and welcomed these changes, particularly for high-tier models. Some anticipate an even broader shift in cost structures across the board.
- DeepSeek v3 Triples Usage: DeepSeek v3 soared in popularity on OpenRouter, reportedly tripling usage since release and matching bigger models in some metrics, as per this post. It competes with Sonnet and GPT-4o at a lower price, fueling discussions that China has caught up in AI.
- Users in the general channel also shared mixed reviews about its performance for coding tasks and poetry. Some praised its creative outputs, while others flagged the results as inconsistent.
- Endpoints & Chat Woes: OpenRouter launched a beta Endpoints API to let devs pull model details, referencing an example usage here. Some users faced OpenRouter Chat lag with large conversation histories, calling for more responsive handling of big data sets.
- The community noted no direct support for batching requests, emphasizing timely GPU usage. Meanwhile, certain 'no endpoints found' errors stemmed from misconfigured API settings, highlighting the importance of correct setup.
- 3D Game Wizardry with Words: A newly shown tool promises 3D game creation from simple text prompts, improving on earlier attempts with o1 and o1-preview. The approach hints at future voxel engine integration for more complex shapes, as teased in this project link.
- Enthusiasts see it as a leap from prior GPT-based attempts, with features seemingly refined for building entire interactive experiences. Some in the channel believe it could transform indie game development pipelines if scaled.
- AI Chat Terminal: Agents in Action: The AI Chat Terminal (ACT) merges agent features with codebase interactions, letting users toggle between providers like OpenAI and Anthropic. It introduces an Agent Mode to automate tasks and aims to streamline coding sessions, as shown in this repo.
- Developers in the app-showcase channel highlighted the potential for flexible multi-model usage within a single terminal. Many praised the convenience for building scripts that transcend typical chat constraints.
LM Studio Discord
- LM Studio Calms Large Models: Build 0.3.5 eased prior bugs in GGUF model loading and tackled session handling for MLX, as referenced in Issue #63.
- Users noted that QVQ 72B and Qwentile2.5-32B run better now, though some memory leaks remain under investigation.
- RPG Fans Retain Narrative Flow: Enthusiasts used models like Mistral and Qwen to manage long-running tabletop storylines, with prompts inspired by ChatGPT TTRPG guides.
- They explored fine-tuning and RAG techniques for better continuity, citing separate chunking as a strategy to keep the lore consistent.
- X99 Systems Keep Pace: Users running Xeon E5 v4 on X99 motherboards reported solid performance for model inference, even with older gear.
- A dual RTX 2060 setup showcased stable handling of bigger models, debunking the urgent need for new hardware.
- Multi-GPU Gains and LoRAs Hype: Participants observed low GPU usage (around 30%) and highlighted that extra VRAM doesn’t always provide a speed boost unless paired with enhancements like NVLink.
- They also speculated on soon-to-arrive video-generation LoRAs, though some doubted results when training on very few still images.
Stability.ai (Stable Diffusion) Discord
- Prompt Precision Powers SD: Many participants found that more descriptive prompts yield superior Stable Diffusion outputs, underscoring the advantage of thorough instructions. They tested various models, highlighting differences in style and resource usage.
- They emphasized the need for robust prompting to command better image control and suggested experimentation with models to refine the results.
- ComfyUI’s Conquerable Complexity: Contributors operated ComfyUI by symlinking models and referencing the Stability Matrix for simpler management, though many found the learning curve steep. They also shared that SwarmUI provided a more accessible interface for novices.
- Users compared less-intimidating frontends like SwarmUI to standard ComfyUI, reflecting on how these tools streamline generative art without sacrificing advanced features.
- Video Generation Gains Steam: Enthusiasts experimented with img2video models in ComfyUI, contrasting them with Veo2 and Flux for efficiency. They discovered that LTXVideo Q8 works well for setups with 8GB of VRAM.
- They remain eager to test new video generation approaches that extend resource-friendly possibilities, continuing to push boundaries on lower hardware specs.
- NSFW LoRA, Some Laughs Ensued: A playful exchange emerged over NSFW filters in LoRA, factoring in how censorship toggles may be managed. Participants wanted open discussions on each setting’s role in controlling adult content.
- They emphasized that standard LoRA constraints occasionally hamper legitimate creative tasks, prompting calls for clearer documentation on censorship toggles.
OpenAI Discord
- OpenAI Outage Incites New Options: Members encountered ChatGPT downtime, referencing OpenAI status page and weighing alternatives like DeepSeek and Claude.
- The outage felt less severe than previous incidents, but it drove renewed interest in exploring different models.
- DeepSeek V3 Dashes Ahead: Members noted DeepSeek V3 boasts a 64k context limit, outperforming GPT-4 in speed and coding consistency.
- Enthusiasts celebrated its reliable coding support, while some pointed out missing features like direct file handling and reliance on OCR.
- GPT-O3 Looms on the Horizon: A late January release for O3-mini was mentioned, raising hopes for the full O3 model soon after.
- Concrete details remain scarce, fueling speculation over its performance and possible new features.
- Acronyms Annoy LLMs: Acronym recognition sparked debate, revealing how certain models struggle to expand domain-specific abbreviations properly.
- Techniques like custom dictionaries or refined prompts were proposed to keep expansions consistent.
- Canvas & ESLint Clashes: Users encountered a canvas window that vanishes seconds after opening, halting their editing workflow.
- Others wrestled with ESLint settings under O1 Pro, aiming for a tidy config that suits advanced development needs.
Stackblitz (Bolt.new) Discord
- ProductPAPI in the Pipeline: A member teased ProductPAPI, an app developed by Gabe to simplify tasks, but withheld details like launch date, core functionality, and API structure.
- Another user said 'we need more insights to gauge its potential' and suggested referencing community feedback threads for any planned expansions.
- Anthropic's Concise Conundrum: Members reported a drop in quality when using Anthropic's concise mode on Bolt, highlighting scalability concerns at peak usage times.
- One user theorized a 'universal scaling strain' across providers, referencing the correlation when a Claude demand warning also triggered slower performance on Bolt.
- Direct Code Tweaks or Bust: Frustrated devs noted they keep receiving 'example code' instead of direct edits, urging others to specify 'please make the changes directly' in their prompts.
- They tested adding clarifying instructions in a single prompt, linking to best practices for prompt phrasing and confirming better code modifications.
- Token Tensions in Bolt: Users flagged high token consumption—some claiming to burn through 1.5 million tokens while rewriting the same code requests, citing ignored prompts and unexpected changes.
- They posted feedback on GitHub and Bolters.io to propose subscription-tier updates, with many exploring free coding on StackBlitz once token caps are hit.
- Rideshare Ambitions with Bolt: A newcomer asked if they could build a nationwide Rideshare App with Bolt, referencing their existing airport rides portal, seeking scalability and multi-region support.
- Community members cheered the idea, calling it 'a bold extension' and citing Bolters.io community guides for step-by-step checklists on expansions.
Unsloth AI (Daniel Han) Discord
- QVQ-72B Debuts Big Scores: QVQ-72B arrived in both 4-bit and 16-bit variants, hitting 70.3% on the MMMU benchmark and positioning itself as a solid visual reasoning contender (Qwen/QVQ-72B-Preview).
- Community members emphasized data formatting and careful training steps, pointing to Unsloth Documentation for model best practices.
- DeepSeek V3 Fuels MoE Buzz: The DeepSeek V3 model, featuring a Mixture of Experts configuration, drew attention for being 50x cheaper than Sonnet (deepseek-ai/DeepSeek-V3-Base).
- Some speculated about OpenAI and Anthropic employing similar techniques, sparking technical discussions on scaling and cost efficiency.
- Llama 3.2 Hits Snags with Data Mismatch: Several users struggled to fine-tune Llama 3.2 on text-only JSONL datasets, encountering unexpected checks for image data despite disabling vision layers.
- Others reported patchy performance, attributing failures to input quality over quantity, while referencing potential solutions in Unsloth's peft_utils.py.
- GGUF and CPU Load Hurdles for Trained Models: A few community members wrestled with deteriorating performance after converting Llama 3.2 models to GGUF via llama.cpp, citing prompt format mismatches.
- Others complained of strange outputs on local hardware, highlighting the need to quantize carefully and consult Unsloth Documentation for proper CPU-only setups.
- Stella Overlooked as Mixed Bread Gains Fans: A user questioned why Stella seldom gets recommended, and Mrdragonfox acknowledged not using it, suggesting it lacks broad community traction.
- Meanwhile, mixed bread models see daily use and strong support, with folks insisting benchmarking and finetuning are vital for real-world outcomes.
Perplexity AI Discord
- Gemini Gains Ground & 'o1' Rumors: Some users claimed that Gemini's Deep Research Mode outperforms Claude 3.5 Sonnet and GPT-4o in context handling and overall utility.
- Speculation arose about a new model named 'o1', prompting questions about whether Perplexity might integrate it for wider AI functionality.
- OpenRouter Embraces Perplexity's Models: After purchasing credits, a user discovered OpenRouter provides direct access to Perplexity for question answering and inference.
- Despite finding this option, the user chose to stick with another provider, highlighting a vibrant discussion about OpenRouter expansions.
- DeepSeek-V3 Stirs Strong Impressions: A mention of DeepSeek-V3 indicated it's available via a web interface and an API, prompting interest in its capabilities.
- Testers described its performance as 'too strong' and hoped pricing would remain stable, comparing it positively to other installations.
- India's LeCun-Inspired Leap: A newly introduced AI model from India references Yann LeCun's ideas to enhance human-like reasoning and ethics, stirring conversation.
- Members expressed optimism about its implications, suggesting it could reshape model training and demonstrate the power of applied AI.
Latent Space Discord
- DeepSeek V3 Storms the Stage: Chinese group DeepSeek introduced a 685B-parameter model, claiming a total training cost of $5.5M with 2.6 million H800 hours and roughly 60 tokens/second throughput.
- Tweets like this one showcase superior benchmarks compared to larger budgets, with some calling it a new bar for cost efficiency.
- ChatGPT Eyes 'Infinite Memory': Rumors state ChatGPT may soon access all past chats, potentially changing how users rely on extensive conversational context.
- A tweet from Mark Kretschmann suggests this feature is imminent, prompting debates on deeper and more continuous interaction.
- Reinforcement Training Ramps LLM Reasoning: A shared YouTube video showed advanced RL approaches for refining large language models’ logic without extra overhead.
- Contributors cited verifier rewards and model-based RMs (e.g., @nrehiew_), suggesting a more structured training method.
- Anduril Partners with OpenAI: A tweet from Anduril Industries revealed a collaboration merging OpenAI models with Anduril’s defense systems.
- They aim to elevate AI-driven national security tech, causing fresh debates on ethical and practical boundaries in the military domain.
- 2024 & 2025: Synthetic Data, Agents, and Summits: Graham Neubig offered a keynote on agents in 2024, while Loubna Ben Allal reviewed papers on Synthetic Data and Smol Models.
- Meanwhile, the AI Engineer Summit is slated for 2025 in NYC, with an events calendar available for those following industry gatherings.
Interconnects (Nathan Lambert) Discord
- DeepSeek’s Daring V3 Debut: DeepSeek unveiled V3 trained on 14.8 trillion tokens, boasting 60 tokens/second (3x faster than V2) and fully open-source on Hugging Face.
- Discussion highlighted Multi-Token Prediction, new reward modeling, and questions on critique efficiency, with members noting it outperforms many open-source models.
- Magnitude 685B: DeepSeek’s Next Big Bet: Rumors swirl about a 685B LLM from DeepSeek possibly dropping on Christmas Day, supposedly over 700GB with no listed license yet, as hinted by a tweet.
- Community members joked about overshadowing existing solutions and expressed curiosity about open-source viability without any clear license noted in the repo.
- MCTS Magic for Better Reasoning: A recent paper (arXiv:2405.00451) showcases Monte Carlo Tree Search (MCTS) plus iterative preference learning to boost reasoning in LLMs.
- It integrates outcome validation and Direct Preference Optimization for on-policy refinements, tested on arithmetic and commonsense tasks.
- DPO vs PPO: The Rivalry Rages: A CMU RL seminar explored DPO vs PPO optimizations for LLMs, hinting at robust ways to handle clip/delta constraints and PRM biases in practice.
- Attendees debated if DPO outperforms PPO, with a paper heading to ICML 2024 and a YouTube session fueling further curiosity.
GPU MODE Discord
- DeepSeek-V3 Deals a Double Punch: The DeepSeek-V3 documentation highlights mass scale FP8 mixed precision training, claiming a cost reduction by 2 orders of magnitude.
- Community members debated the project's funding and quality tradeoffs, but recognized the potential for big savings in HPC workloads.
- Triton Trips on FP8 to BF16: A casting issue from fp8 to bf16 on SM89 leads to ptx errors, covered in Triton’s GitHub Issue #5491.
- Developers proposed using `.to(tl.float32).to(tl.bfloat16)` plus a dummy op to prevent fusion while addressing the ptx error, as sketched below.
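A minimal Triton kernel shape for that workaround; the kernel body is a bare elementwise copy and the names are illustrative, but the key lines are the fp8 -> fp32 -> bf16 two-step cast and the dummy multiply that keeps the casts from fusing:

```python
import triton
import triton.language as tl

@triton.jit
def fp8_to_bf16(x_ptr, y_ptr, n, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)   # fp8 input
    x32 = x.to(tl.float32) * 1.0           # dummy op: keeps the two casts from fusing
    tl.store(y_ptr + offs, x32.to(tl.bfloat16), mask=mask)
```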
- TMA Triumph Over cp.async: Users explained that TMA outperforms cp.async for GEMM on Hopper (H100), thanks to the higher flops on H100.
- They highlighted async support, bulk scheduling, and bounds checks as crucial features that reduce register usage in HPC kernels.
- No-Backprop Approach Sparks 128 Forward Passes: A new training method claims to avoid backprop and momentum entirely, instead taking 128 forward passes to estimate the gradient, even though the estimate has low cos similarity to the true gradient (a generic zeroth-order sketch follows below).
- Though it promises 97% energy savings, many engineers worry about its practicality beyond small demonstration setups.
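For intuition about gradient estimation from forward passes alone, here is the generic zeroth-order estimator such methods build on (textbook SPSA/evolution-strategies style, not the specific paper's algorithm): average random directions weighted by finite-difference slopes.

```python
import numpy as np

def zeroth_order_grad(f, theta, n_probes=128, eps=1e-3):
    """Estimate grad f(theta) from forward passes only: probe f along random
    directions and average direction * finite-difference slope."""
    f0 = f(theta)                      # one baseline forward pass
    g = np.zeros_like(theta)
    for _ in range(n_probes):          # one extra forward pass per probe
        u = np.random.randn(*theta.shape)
        g += u * (f(theta + eps * u) - f0) / eps
    return g / n_probes

theta = np.array([1.0, -2.0, 3.0])
print(zeroth_order_grad(lambda x: float(x @ x), theta))  # ≈ [2, -4, 6], the true gradient
```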
- ARC-AGI-2 & 1D Task Generators: Researchers gathered resources for ARC-AGI-2 experiments in a shared GitHub repository, inviting community-driven exploration.
- They also showcased 1D task generators that may extend into 2D symbolic reasoning, stimulating broader interest in puzzle-based AI tasks.
Notebook LM Discord
- Podcast Partnerships with Google News: Members proposed integrating Google News with AI-generated podcast content to summarize the top 10 stories in short or long segments, sparking interest in interactive Q&A. They reported rising engagement from listeners intrigued by a dynamic blend of news delivery and on-demand discussions.
- Several participants shared examples of comedic bits woven into real-time updates, reflecting how AI-driven podcasts might keep audiences entertained while staying informed.
- AI Chatters About Life’s Biggest Questions: A user showcased an AI-generated podcast that playfully addressed philosophy, describing it as 'smurf-tastic banter' in a refreshing twist. This format combined humor with reflective conversation, hinting at a broader appeal for audiences who relish intellectual fun.
- Others called it a lively alternative to standard talk radio, highlighting how natural-sounding AI voices can both amuse and prompt deeper thought.
- Pathfinder in 15 Minutes: A participant generated a concise 15-minute podcast summarizing a 6-book Pathfinder 2 campaign, giving game masters a rapid-fire plot overview. They balanced storyline highlights with relevant tips, enabling swift immersion in the tabletop content.
- This approach stirred excitement around short-form tabletop recaps, signaling potential synergy between AI-led storytelling and roleplaying communities.
- Akas Bridges AI Podcasters and Their Audiences: An enthusiast introduced Akas, a website for sharing AI-generated podcasts and publishing personalized RSS feeds, as seen on Akas: share AI generated podcasts. They positioned it as a smooth connection between AI-driven shows and each host’s individual voice, bridging creative ideas to larger audiences.
- Some predicted expansions that unify tools like NotebookLM, encouraging user-driven AI episodes to reach broader platforms and spark further collaboration.
LlamaIndex Discord
- Report-Ready: LlamaParse Powers Agentic Workflows: Someone posted a new Report Generation Agent that uses LlamaParse and LlamaCloud to build formatted reports from PDF research papers, referencing a demonstration video.
- They highlighted the method's potential to automate multi-paper analysis, offering robust integration with input templates.
- DocumentContextExtractor Slices Costs: A conversation centered on using DocumentContextExtractor for batch processing to cut expenses by 50%, allowing users to process tasks off-hours.
- This approach spares the need to keep Python scripts running, letting individuals review results whenever they choose.
- Tokenization Tangle in LlamaIndex: Participants criticized the LlamaIndex tokenizer for lacking decoding support, prompting disappointment over the partial feature set.
- Though chunk splitting and size management were recommended, some teased the idea of dropping truncation entirely to blame users for massive file submissions.
- Unstructured RAG Steps Up: A blog detailed how Unstructured RAG, built with LangChain and Unstructured IO, handles data like images and tables more effectively than older retrieval systems, referencing this guide.
- It also described using FAISS for PDF embeddings and suggested an Athina AI evaluation strategy to ensure RAG accuracy in real-world settings.
- LlamaIndex Docs and Payroll PDFs: Some look for ways to get LlamaIndex documentation in PDF and markdown, while others struggle to parse payroll PDFs using LlamaParse premium mode.
- Discussions concluded that generating these docs is feasible, and LlamaParse can handle payroll tasks if fully configured.
Eleuther Discord
- Optimizers on the Loose: Some discovered missing optimizer states in Hugging Face checkpoints, stirring questions about checkpoint completeness.
- Others confirmed that the checkpointing code ordinarily saves these states, leaving the real cause uncertain.
- VLM Fine-Tuning Frenzy: Engineers grappled with model-specific details for LLaVA, Qwen-VL, and InternVL finetuning scripts, noting that each approach differs.
- They shared LLaVA as a popular reference, emphasizing that following the correct methodology matters for results.
- Chasing Lower Latency: Participants compiled a range of methods targeting CUDA- or Triton-level optimizations to trim LLM inference times.
- They also pointed to progress in open-source solutions that sometimes beat GPT-4 in tasks like function calling.
- GPT-2’s Shocking First Token: In GPT-2, the initial token’s activations soared to around 3000, unlike the typical 100 for subsequent tokens (a measurement sketch follows below).
- Debate continued over whether a BOS token even exists in GPT-2, with some asserting it’s simply omitted by default.
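The observation is easy to reproduce with a hedged sketch like the following; the prompt and layer index are arbitrary choices, not the original poster's setup:

```python
# Print per-token activation norms from GPT-2's residual stream.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

inputs = tok("The quick brown fox jumps over the lazy dog", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

norms = out.hidden_states[6][0].norm(dim=-1)  # layer 6 is an arbitrary mid depth
for t, n in zip(tok.convert_ids_to_tokens(inputs["input_ids"][0]), norms.tolist()):
    print(f"{t:>12}  {n:8.1f}")  # the first position's norm dwarfs the rest
```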
- EVE Sparks Encoder-Free Curiosity: Researchers explored EVE, a video-focused encoder-free vision-language model that sidesteps CLIP-style architectures.
- Meanwhile, the Fuyu model series faced doubts about practical performance gains, prompting calls for additional insights on encoder efficiency.
tinygrad (George Hotz) Discord
- Lean Bounty and the BITCAST Quest: Members tackled the Lean bounty proof challenges, referencing tinygrad notes for guidance. They also debated implementing BITCAST const folding to optimize compile time.
- One user asked which directory held the relevant code, and another suggested referencing older PRs for examples of how to proceed.
- Tinygrad vs. PyTorch Face-Off: Some reported that Tinygrad took 800ms versus PyTorch's 17ms for a forward pass on CUDA, prompting attempts to close the gap with jitting. Community members anticipated gains from beam search and reiterated that a well-tuned setup could match or exceed PyTorch speeds.
- They acknowledged speed disparities likely stemmed from different CUDA setups and system configurations. A few participants suggested intensifying jitting efforts to shrink the performance gap.
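A hedged sketch of the jitting approach under discussion; the toy model and shapes are placeholders, and the exact import path can vary across tinygrad versions:

```python
# TinyJit caches compiled kernels after the first call, so subsequent forward
# passes replay them instead of recompiling.
from tinygrad import Tensor, TinyJit

w1, w2 = Tensor.randn(512, 512), Tensor.randn(512, 10)

@TinyJit
def forward(x: Tensor) -> Tensor:
    return x.matmul(w1).relu().matmul(w2).realize()

for _ in range(5):  # first iteration compiles; the rest replay cached kernels
    out = forward(Tensor.randn(64, 512))
print(out.shape)
```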
- Rewrite Rumble in Matching Engines: Participants explored matching engine performance bounties, with links to open issues at tinygrad/tinygrad#4878.
- One user clarified their focus on the rewrite portion, referencing outdated PRs that still guided the direction of proposed solutions.
- Input Handling Hiccups: One user flagged input tensor recreation in loops that severely slowed Tinygrad while producing correct outputs, also hitting attribute errors with the CUDA allocator. In response, changes from PR #8309 were merged to fix these issues, underscoring the importance of regression tests for stable performance.
- A deeper dive revealed that `tiny_input.clone()` triggered errors in the CUDA memory allocator. Contributors agreed more testing was needed to prevent regression in loop-based input creation.
- GPU Gains with Kernel Caching: Chat highlighted an RTX 4070 GPU with driver version 535.183.01, using CUDA 12.2, raising open-source driver concerns. Discussions on beam search caching confirmed kernels are reused for speed, with the prospect of sharing those caches across similar systems.
- Attendees surmised potential driver mismatches might limit performance, urging debug logs to confirm. Some suggested distributing compiled beam search kernels to expedite setups on matching hardware.
Cohere Discord
- Riding the CMD-R & R7B Rollercoaster: Members debated upcoming changes to CMD-R and showed curiosity about the 'two ans' quirk in R7B, referencing a shared image that hints at unexpected updates.
- They joked about how rarely such weird outcomes appear, with some calling it 'a comedic glitch worth investigating' in the community discussion.
- Pinching Pennies on Re-ranker Pricing: The Re-ranker cost structure drew attention, especially the $2.50 for input and $10.00 for output per 1M tokens, as shown in Cohere's pricing page.
- Questions spurred interest in how teams might budget for heavy usage, with some folks comparing this to alternative solutions.
- LLM University Gains Ground: Cohere introduced LLM University, offering specialized courses for NLP and LLMs, aiming to bolster enterprise AI expertise.
- Attendees gave enthusiastic feedback, praising the well-structured resources and noting that users can adopt these materials for quick skill expansion.
- Command R & R+ Reign Supreme in Multi-step Tasks: Command R provides 128,000-token context capacity and efficient RAG performance, while Command R+ showcases top-tier multi-step tool usage.
- Participants credited its multilingual coverage (10 languages) and advanced training details, especially for challenging production demands in the cmd-r-bot channel.
- Voice, VAD & Music Merge AI Magic: An AI Engineer showcased a Voice to Voice chat app leveraging DNN-VAD and also shared music generation from text prompts using a stereo-melody-large model.
- They invited collaborators, stating 'I would like to work with you,' and extended a Merry Christmas greeting to keep the atmosphere upbeat.
Modular (Mojo 🔥) Discord
- io_uring Intrigue & Network Nudges: Members explored how io_uring can enhance networking performance, referencing man pages as a starting point and acknowledging limited familiarity.
- Community speculation suggests io_uring might streamline asynchronous I/O, with calls for real-world benchmarks to confirm the expected gains.
- StringRef Shenanigans & Negative Length Crash: A crash occurred when StringRef() received a negative length, pointing to a missing length check in memcpy.
- One user recommended StringSlice instead, emphasizing StringRef’s risk when dealing with length validation.
- EOF Testing & Copyable Critique: Users confirmed read_until_delimiter triggers EOF correctly, referencing GitHub commits.
- Conversations highlighted Copyable and ExplicitlyCopyable traits, with potential design adjustments surfacing on the Modular forum.
- Mojo Swag & Modular Merch Frenzy: Members flaunted their Mojo swag, expressing gratitude for overseas shipping and sharing photos of brand-new gear.
- Others praised Modular’s merchandise for T-shirt quality and 'hardcore' sticker designs, fueling further brand excitement.
- Modular Kernel Queries & MAX vs XLA: A user inquired about a dedicated kernel for the modular stack, hinting at possible performance refinements.
- MAX was compared to XLA, citing 'bad compile times with JAX' as a reason to consider alternative compiler strategies.
DSPy Discord
- PyN8N Gains Node Wiz: Enthusiasts noted that PyN8N integrates AI to build custom workflows, though some users reported loading issues linked to ad blockers.
- They highlighted the README’s aspirational tone and recommended switching browsers or disabling extensions to resolve these site-blocking errors.
- DSPy Dances with DSLModel: Community members found that DSPy's functionality can be extended via DSLModel, unlocking advanced features for better performance.
- They suggested this approach reduces code overhead while keeping complex data workflows streamlined.
- NotebookLM Inline Sourcing Sparks Curiosity: A user asked how NotebookLM accomplishes inline sourcing, noting a lack of detailed responses.
- They sought more insight into the underlying implementation but the conversation provided limited follow-up.
- Jekyll Glossary Gets a DSPy Boost: A Jekyll script was shared to generate a glossary of key terms, using DSPy for LLM interactions.
- They refined entries like Artificial General Intelligence and noted potential improvements for the long description parameter.
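A minimal DSPy sketch of what such a glossary signature could look like; the field names and model choice are assumptions, not the shared script:

```python
# Define a glossary-entry signature and run it with a single Predict module.
import dspy

class GlossaryEntry(dspy.Signature):
    """Write a glossary entry for a technical term."""
    term = dspy.InputField()
    short_description = dspy.OutputField(desc="one sentence")
    long_description = dspy.OutputField(desc="two to three sentences")

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # model choice is illustrative
entry = dspy.Predict(GlossaryEntry)(term="Artificial General Intelligence")
print(entry.short_description)
```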
- Typing.TypedDict & pydantic Tangle: Members discovered `typing.TypedDict` for typed fields, acknowledging its complexity for Python use cases.
- They also discussed pydantic for multi-instance output arrays, aiming for a more refined layout.
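For illustration, a sketch contrasting the two approaches; the field names are hypothetical:

```python
# TypedDict gives static typing only; pydantic validates at runtime and
# handles multi-instance output arrays cleanly.
from typing import TypedDict
from pydantic import BaseModel

class EntryDict(TypedDict):  # checked by type checkers, not at runtime
    term: str
    definition: str

class Entry(BaseModel):      # validated when instantiated
    term: str
    definition: str

class Output(BaseModel):     # an array of instances in one payload
    entries: list[Entry]

out = Output.model_validate(
    {"entries": [{"term": "AGI", "definition": "Artificial General Intelligence"}]}
)
print(out.entries[0].term)
```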
LLM Agents (Berkeley MOOC) Discord
- Certificate Confusion & Strict Forms: Members confirmed that certificates will be distributed by the end of January, as noted here.
- One participant asked if they'd still receive certification without the Certificate Declaration Form, and was told it's mandatory with no exceptions.
- Spring Session Hype for LLM Agents MOOC: Community chatter revealed a spring start for the next LLM Agents course, aligning with completion of the current session.
- Attendees showed excitement, referencing the certificate process and hoping for an on-time rollout of course updates.
OpenInterpreter Discord
- Open Interpreter's Pioneering Pixel-Precision: The Open Interpreter API offers near pixel-perfect control for UI automation, includes OCR for text recognition, and provides usage examples in Python scripts.
- A community member mentioned OCR appears broken, while others inquired about the desktop version release timeline, showing broad interest in further development.
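A hedged sketch of what driving that API from Python can look like; these calls follow Open Interpreter's documented computer API, but exact names vary by version and should be treated as assumptions:

```python
# Screen capture, OCR-targeted click, and keyboard input via the computer API.
from interpreter import interpreter

interpreter.computer.display.view()           # capture the current screen
interpreter.computer.mouse.click("Submit")    # locate on-screen text via OCR, then click
interpreter.computer.keyboard.write("hello")  # type into the focused field
```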
- Voice to Voice Chat & QvQ synergy: One AI engineer introduced a Voice to Voice chat app featuring Music Generation from text prompts, seeking collaboration with other Generative AI enthusiasts.
- Another user questioned how QvQ would function in OS mode for Open Interpreter, hinting at bridging speech and system-level tasks.
Nomic.ai (GPT4All) Discord
- Copy Button Conundrum: A member noticed the absence of a dedicated 'copy' button for AI-generated code in the GPT4All chat screen UI, prompting questions about possible UI improvements.
- They expressed gratitude for any workaround suggestions, emphasizing that code-copy convenience ranks high among developer requests.
- Keyboard Shortcut Shenanigans: Community members confirmed that mouse-based cut and paste do not work across chat UI or configuration pages, frustrating those relying on right-click behavior.
- They clarified that Control-C and Control-V remain functional, offering a fallback for copying code snippets.
- New Template Curiosity: A member asked in French if anyone had tried writing with the new template, indicating multilingual adoption outside of English contexts.
- They hoped for feedback on post-install steps, though no specific outcomes or shared examples emerged from the exchange.
LAION Discord
- Mega Audio Chunks for TTS: A member sought tips for building a TTS dataset from massive, hour-long audio files and asked about tools to split them properly.
- They aimed for a method that maintains quality while reducing manual labor, focusing on audio segmentation approaches that handle large file sizes.
- Whisper Splits the Script: Another participant proposed Whisper for sentence-level splitting, seeing it as a practical way to prepare audio for TTS tasks.
- They highlighted how Whisper streamlines segmentation, reducing production time while preserving consistent sentence boundaries.
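A sketch of that segmentation pattern, assuming openai-whisper and pydub; paths and model size are placeholders:

```python
# Transcribe a long recording, then cut clips at Whisper's segment timestamps
# and save paired transcripts for TTS training.
import os
import whisper
from pydub import AudioSegment

os.makedirs("clips", exist_ok=True)
model = whisper.load_model("base")
result = model.transcribe("long_recording.wav")

audio = AudioSegment.from_file("long_recording.wav")
for i, seg in enumerate(result["segments"]):
    start_ms, end_ms = int(seg["start"] * 1000), int(seg["end"] * 1000)
    audio[start_ms:end_ms].export(f"clips/{i:05d}.wav", format="wav")
    with open(f"clips/{i:05d}.txt", "w") as f:
        f.write(seg["text"].strip())  # paired text for each audio clip
```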
MLOps @Chipro Discord
- HPC MLOps frameworks in the spotlight: A member requested a stable and cost-effective ML ops framework for HPC requiring minimal overhead and singled out Guild AI as a possibility.
- They questioned Guild AI's reliability and leaned toward a self-hosted approach, citing distaste for SaaS solutions.
- Server chores spark talk of a DIY ops tool: Mounting setup and maintenance burdens made them wary of running a dedicated server for MLOps tasks.
- They voiced willingness to code a simple ops framework themselves if it avoids heavy server administration.
The Axolotl AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Torchtune Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Mozilla AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The HuggingFace Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
Codeium (Windsurf) â–· #content (1 messages):
Windsurf AI
- Engineers Share Windsurf Creation Insights: A new video features our brilliant engineers discussing the innovative strategies behind the creation of Windsurf. They explain how they managed to break every industry convention in the process.
- You can watch the full video on Windsurf's Twitter.
- Happy Holidays Message: The announcement included warm holiday wishes from the Windsurf team, celebrating the festive season with a positive note. This gesture emphasizes community spirit during the holiday period.
Link mentioned: Tweet from Windsurf (@windsurf_ai): What exactly is Windsurf? Watch how we dared to innovate by breaking every industry convention 🌊
Codeium (Windsurf) ▷ #discussion (433 messages🔥🔥🔥):
Windsurf Performance Issues, Cascade Base Model Concerns, Integration with Remote Hosts, User Experience Feedback, Login and API Errors
- Windsurf Faces Performance Issues: Many users reported experiencing errors and unresponsive behavior in Windsurf, especially after the recent update, with messages often resulting in no response or errors like 'Error generating response'.
- Some users noted that even after announcement of improvements, they continued to face issues with the IDE, leading to frustration, particularly for those on a PRO plan.
- Concerns About Cascade Base Model: Several users expressed dissatisfaction with the Cascade Base model, finding it inadequate for complex coding tasks compared to Claude 3.5 Sonnet.
- While some users claimed improvements after adding global rules, others saw no noticeable enhancements and felt the base model was becoming increasingly difficult to rely on.
- Integration Problems with Remote Hosts: Users connecting to SSH Remote Hosts experienced significant delays in performance, with actions executing but visually lagging until Cascade updated.
- This leads to a confusing workflow, as users found that despite commands executing correctly, the interface did not reflect changes promptly.
- User Experience Feedback on Interface: Several users reported issues with clicking into chat history, with some finding they could only access previously sent messages via keyboard navigation instead of direct clicks.
- This suggests a persistent interface bug that is frustrating users and hampering productivity within the application.
- Community Engagement and Support Requests: Users discussed the importance of filing support tickets for ongoing issues and highlighted their frustrations with being unable to use the IDE effectively after investing heavily in it.
- The community suggested that improvements to communication channels regarding updates and outages would greatly enhance the user experience.
- Spitting Coffee Fbi Agent GIF - Spitting Coffee Fbi Agent - Discover & Share GIFs: Click to view the GIF
- Youre Kidding Kyle Broflovski GIF - Youre Kidding Kyle Broflovski Sheila Broflovski - Discover & Share GIFs: Click to view the GIF
- Support | Windsurf Editor and Codeium extensions: Need help? Contact our support team for personalized assistance.
- Cascade Memories: Personalize Windsurf with Custom Rules: Set it and forget it. Cascade Memories lets you create custom rules and applies them automatically, saving you time and keeping your workflow smooth.Download...
- GitHub - SchneiderSam/awesome-windsurfrules: 📄 A curated list of awesome global_rules.md and .windsurfrules files: 📄 A curated list of awesome global_rules.md and .windsurfrules files - SchneiderSam/awesome-windsurfrules
- GitHub - ichoosetoaccept/awesome-windsurf: A collection of awesome resources for working with the Windsurf code editor: A collection of awesome resources for working with the Windsurf code editor - ichoosetoaccept/awesome-windsurf
Codeium (Windsurf) ▷ #windsurf (869 messages🔥🔥🔥):
Windsurf Issues, Cascade Performance, User Experiences, AI Model Performance, Project Development Challenges
- Windsurf Faces Repeated Outages: Many users reported ongoing issues with Windsurf, experiencing slow responses or complete lack of interaction from Cascade.
- Users expressed frustration over the consumption of tokens without receiving adequate outputs, prompting discussions about the stability of the platform.
- Struggles With Cascade's Performance: Several users criticized Cascade Base for its inability to follow prompts correctly, often misunderstanding simple commands like 'git commit'.
- This led to significant frustration, especially after spending considerable time working with the assistant without it meeting their expectations.
- Concerns Regarding Credit Consumption: The conversation revealed a common concern about losing credits for unresponsive queries, leaving users feeling unsatisfied with the service's dependency on tokens.
- Individuals expressed disappointment over the efficiency of Cascade, particularly in handling large codebases and making contextually accurate suggestions.
- User Experiences and Suggestions: Users shared experiences with using alternative tools, expressing hope for improvements and increased effectiveness from Windsurf and Cascade.
- Some users outlined methods to streamline their interactions, emphasizing the need for more precise commands and better performance from the AI.
- Learning Curves and Future Development: Many users acknowledged the steep learning curve associated with using Windsurf effectively, especially for larger projects.
- Despite the frustrations, some users remained optimistic about Windsurf's potential to assist in their development processes with continued improvements.
- iAircon - Easy Aircon Booking: no description found
- Dr. Desenrola: no description found
- Gjirlfriend GIF - Gjirlfriend - Discover & Share GIFs: Click to view the GIF
- Uhh Cat GIF - Uhh Cat Meow - Discover & Share GIFs: Click to view the GIF
- Bye Okay GIF - Bye Okay Slide - Discover & Share GIFs: Click to view the GIF
- Popcat Wen GIF - Popcat Wen - Discover & Share GIFs: Click to view the GIF
- This Fine GIF - This Is Fine - Discover & Share GIFs: Click to view the GIF
- Speak To The Manager GIF - Supervisor Speak To The Manager Manager - Discover & Share GIFs: Click to view the GIF
- Yifan - Beyond the Hype: Insights and rants from a computer scientist turned tech founder.
- Is Claude Down? Live Status & Performance Monitor - Helicone: Check if Claude or Anthropic API is working. Live status monitoring, current outages, API availability, and performance metrics for Claude 3.5 Sonnet, Claude 3 Opus, Claude 2.1, and Claude Instant. Re...
- Contemplating Thinking GIF - Contemplating Thinking Eating - Discover & Share GIFs: Click to view the GIF
- Luxury Virtual Market Game (forked): no description found
- Reaction Thinking GIF - Reaction Thinking Idk - Discover & Share GIFs: Click to view the GIF
- Daria Monke GIF - Daria Monke Orangutan - Discover & Share GIFs: Click to view the GIF
- Karen Being A Karen GIF - Karen Being A Karen Calling The Police - Discover & Share GIFs: Click to view the GIF
- Snoopy Snoopy And Woodstock GIF - Snoopy Snoopy and woodstock Woodstock - Discover & Share GIFs: Click to view the GIF
- Balthazar Crazy GIF - Balthazar Crazy Family guy - Discover & Share GIFs: Click to view the GIF
- Viva La GIF - Viva La Dirt - Discover & Share GIFs: Click to view the GIF
- Goodbye Died GIF - Goodbye Died - Discover & Share GIFs: Click to view the GIF
- Meme Loafcat GIF - MEME LOAFCAT CRYPTO - Discover & Share GIFs: Click to view the GIF
- Contact | Windsurf Editor and Codeium extensions: Contact the Codeium team for support and to learn more about our enterprise offering.
- OpenAI Status: no description found
- Jeeks Balou GIF - Jeeks Balou Guigui - Discover & Share GIFs: Click to view the GIF
- Support | Windsurf Editor and Codeium extensions: Need help? Contact our support team for personalized assistance.
- Lovable: Build software products, using only a chat interface
- GitHub - shanalikhan/code-settings-sync: 🌴💪 Synchronize your Visual Studio Code Settings Across Multiple Machines using GitHub GIST 💪🌴: 🌴💪 Synchronize your Visual Studio Code Settings Across Multiple Machines using GitHub GIST 💪🌴 - shanalikhan/code-settings-sync
- GitHub - bungrudi/mikkadb: Contribute to bungrudi/mikkadb development by creating an account on GitHub.
Cursor IDE ▷ #general (744 messages🔥🔥🔥):
DeepSeek V3 performance, Cursor IDE and DeepSeek integration, Agent mode and token consumption, Challenges with UI design in Next.js, Comparison of different AI models
- DeepSeek V3 impresses users: Users are finding DeepSeek V3 to be efficient and comparable to Sonnet 3.5, praising its ability to handle tasks with minimal effort and lower costs.
- Its capabilities in generating code and conducting analysis have led to discussions about its potential integration within Cursor IDE.
- Cursor IDE's token limitations: Users noted that the context window in Cursor IDE is limited, which can affect performance when generating code or analyzing large projects.
- This has prompted discussions on how different models, like DeepSeek and Cline, handle tokens and context more efficiently compared to Cursor.
- UI design struggles in Next.js: Hackers expressed frustrations with UI design when using Next.js, highlighting difficulties encountered while utilizing Claude for design tasks.
- Recommendations included utilizing specific libraries and components, such as shadcn, and embedding documentation into context for better results.
- Challenges in model-specific features: There were mentions of how different AI models interact with code and how feature implementation varies based on what the models were trained on.
- The conversation highlighted the importance of using the correct version of frameworks and the potential benefits of using embeddings or RAGs for improved performance.
- OpenAI's fluctuating performance: Overall concerns were raised about OpenAI's reliability following recent performance issues, with users highlighting the improvements offered by alternative models.
- Some advocated for testing multiple models in tandem to strike a balance between performance and cost-effectiveness.
- Alumae Alumaeyy GIF - Alumae Alumaeyy Gubby - Discover & Share GIFs: Click to view the GIF
- Downloads | Cursor - The AI Code Editor: Choose your platform to download the latest version of Cursor.
- DeepSeek Platform: Join DeepSeek API platform to access our AI models, developer resources and API documentation.
- Tweet from DeepSeek (@deepseek_ai): 🚀 Introducing DeepSeek-V3! Biggest leap forward yet: ⚡ 60 tokens/second (3x faster than V2!) 💪 Enhanced capabilities 🛠 API compatibility intact. Fully open-source models & papers. 1/n
- Cloud Email Sending Service - Amazon Simple Email Service - AWS: no description found
- Cursor - Community Forum: A place to discuss Cursor (bugs, feedback, ideas, etc.)
- 5685 UI elements: CSS & Tailwind: no description found
- Your First API Call | DeepSeek API Docs: The DeepSeek API uses an API format compatible with OpenAI. By modifying the configuration, you can use the OpenAI SDK or softwares compatible with the OpenAI API to access the DeepSeek API.
- Cursor Directory: Find the best cursor rules for your framework and language
aider (Paul Gauthier) â–· #announcements (1 messages):
Aider v0.70.0 release, Analytics opt-in feature, Error handling improvements, Model support enhancements
- Aider v0.70.0 Fully Supports o1 Models: The latest release of Aider v0.70.0 now includes full support for o1 models and new install methods via uv, allowing for one-liner installations.
- Additionally, it improves watch files functionality by honoring `--subtree-only` and includes improved prompting for better model reliability.
- Analytics Opt-in Feature Rolled Out: Aider plans to ask 10% of users to opt in to analytics to better understand how the tool is used.
- This move aims to gather data on user interactions and improve overall user experience.
- Better Error Handling for Interactive Commands: This release brings better error handling when using interactive commands via `/load` or `--load`, enhancing user navigation.
- The system now gracefully handles unicode errors in git path names to prevent disruption.
- Improved Metadata and Bug Fixes: A fix for gemini model names in model metadata has been introduced alongside a bugfix for auto-suggest, refining the tool's performance.
- These enhancements contribute to more streamlined usage and fewer interruptions.
- Aider's Contribution Noted: Aider reported it has written 74% of the code in this release based on its git commit history.
- This statistic emphasizes the tool's self-sufficiency and evolving capabilities.
Link mentioned: Release history: Release notes and stats on aider writing its own code.
aider (Paul Gauthier) ▷ #general (557 messages🔥🔥🔥):
DeepSeek V3 vs O1 Pro, Model Comparisons: Claude vs DeepSeek, Using Aider with DeepSeek, Challenges in Code Implementation, Context Limitations in LLMs
- DeepSeek V3 Performance Compared to O1 Pro: Users have noted that DeepSeek V3 is three times faster than its predecessor and shows promising capabilities in coding tasks, outperforming Sonnet 3.5 in some scenarios.
- Many are exploring the potential of using DeepSeek for a variety of programming tasks and are generally pleased with its performance despite some limitations.
- Model Comparisons: Claude vs DeepSeek: There is skepticism about the current performance of Claude's models, especially with recent releases like the 3.5 Haiku, which some users find lacking compared to alternatives like DeepSeek V3.
- DeepSeek's ability to deliver full file outputs in certain modes has garnered positive feedback, though slow response times remain a drawback.
- Implementing Changes with Aider and DeepSeek: Users are looking to leverage Aider's capabilities alongside DeepSeek to automate and implement code changes more efficiently, seeking a synergistic workflow between the two models.
- There is hope for improved autonomous functions and better context learning capabilities in future updates of Aider and DeepSeek.
- Challenges in Code Implementation: Concerns were raised about DeepSeek's limitations in handling complex programming tasks and maintaining context during longer coding sessions.
- Users expressed a desire for models that can better grasp large codebases and deliver comprehensive updates or refactorings without constant manual intervention.
- Context Limitations in LLMs: DeepSeek V3 has a context limit of 64k tokens, leading to frustration when handling verbose documentation or complex codebases.
- This limitation has spurred conversations about the need for models that can seamlessly manage larger contexts while providing meaningful, contextual responses.
- Tweet from DeepSeek (@deepseek_ai): 🚀 Introducing DeepSeek-V3! Biggest leap forward yet: ⚡ 60 tokens/second (3x faster than V2!) 💪 Enhanced capabilities 🛠 API compatibility intact. Fully open-source models & papers. 1/n
- Tweet from Alex Cheema - e/acc (@alexocheema): 2 MacBooks is all you need. Llama 3.1 405B running distributed across 2 MacBooks using @exolabs_ home AI cluster
- Aider LLM Leaderboards: Quantitative benchmarks of LLM code editing skill.
- Im The Captain Now Im The Boss GIF - Im The Captain Now Im The Boss Captain - Discover & Share GIFs: Click to view the GIF
- DeArrow - A Browser Extension for Better Titles and Thumbnails: DeArrow is a browser extension for replacing titles and thumbnails on YouTube with community created accurate versions. No more clickbait.
- Agentic Engineer - Build LIVING software: Build LIVING software. Your guide to mastering prompts, prompt chains, ai agents, and agentic workflows.
- BigCodeBench Leaderboard - a Hugging Face Space by bigcode: no description found
- Scripting aider: You can script aider via the command line or python.
- Genius Think GIF - Genius Think Be Clever - Discover & Share GIFs: Click to view the GIF
- Aider is SOTA for both SWE Bench and SWE Bench Lite: Aider sets SOTA for the main SWE Bench, after recently setting SOTA for the Lite version.
- FAQ: Frequently asked questions about aider.
- Qwen/QVQ-72B-Preview · Hugging Face: no description found
- o1 tops aider’s new polyglot leaderboard: o1 scores the top result on aider’s new multi-language, more challenging coding benchmark.
- Tweet from Ivan Fioravanti ᯅ (@ivanfioravanti): Thunderbolt connection between M2 Ultra and 2 M4 Max with exo by @exolabs Let's make some tests with llama 3.2 405B!
- Reddit - Dive into anything: no description found
- DeepSeek Service Status: no description found
- deepseek-ai/DeepSeek-V3-Base · Hugging Face: no description found
- Mac mini: Mac mini with the M4 and M4 Pro chips. Built for Apple Intelligence. With front and back ports. Financing options available. Buy now from apple.com.
- GitHub - richardanaya/colossus: A realtime voice AI tool for controlling aider: A realtime voice AI tool for controlling aider. Contribute to richardanaya/colossus development by creating an account on GitHub.
- GitHub - robert-at-pretension-io/mcp: code: code. Contribute to robert-at-pretension-io/mcp development by creating an account on GitHub.
- GitHub - exo-explore/exo: Run your own AI cluster at home with everyday devices 📱💻🖥⌚: Run your own AI cluster at home with everyday devices 📱💻🖥⌚ - exo-explore/exo
- Amazon.com: Nvidia Tesla K80 : Electronics: no description found
aider (Paul Gauthier) ▷ #questions-and-tips (49 messages🔥):
Aider configuration with aliases, DeepSeek Chat V3 performance, Repo-map functionality, Model combinations in Aider, Managing API keys in config files
- Challenges with Aider alias configurations: Users faced difficulties when attempting to set up model aliases in the .env file, which did not work as expected.
- Some suggested using a YML config file instead, which can handle multiple aliases more effectively.
- DeepSeek Chat V3 Performance Insights: Participants noted that DeepSeek Chat V3 is performing well on the polyglot leaderboard and may replace Sonnet as a go-to model due to its pricing.
- One user recommended using DeepSeek V3 alongside Gemini exp 1206, claiming it offered good results for feature development.
- Understanding repo-map functionality: A user inquired about the repo-map feature, which updates slowly for large repositories when switched to specific models.
- Another user suggested using the --map-refresh manual command to streamline updates instead of automatic refresh.
- Best Model Combinations in Architect Mode: Discussion on the optimal model combinations for Aider leaned towards using O1 or Gemini, mentioning DeepSeek as a viable option.
- Feedback indicated that users experienced some struggles with complex tasks, like creating specific function presets, alongside ease of use and cost efficiency.
- Managing API keys for security: A new user asked about security implications of committing Aider config files without API keys included.
- It was advised to separate API keys in a .env file to keep sensitive information local, while the config file could be included in the repository.
- Whatchu Talkin About Willis Arnold Jackson GIF - Whatchu Talkin About Willis Arnold Jackson Diffrent Strokes - Discover & Share GIFs: Click to view the GIF
- Using /help: Use “/help " to ask for help about using aider, customizing settings, troubleshooting, using LLMs, etc.
- FAQ: Frequently asked questions about aider.
- Model warnings: aider is AI pair programming in your terminal
- Aider LLM Leaderboards: Quantitative benchmarks of LLM code editing skill.
- Reddit - Dive into anything: no description found
- Show absolute path for files far outside the directory. by apaz-cli · Pull Request #2702 · Aider-AI/aider
- Models & Pricing | DeepSeek API Docs: The prices listed below are in unites of per 1M tokens. A token, the smallest unit of text that the model recognizes, can be a word, a number, or even a punctuation mark. We will bill based on the tot...
aider (Paul Gauthier) â–· #links (2 messages):
BigCodeBench Leaderboard, GitDiagram for Visualization, GitIngest for Codebases
- BigCodeBench evaluates LLMs: The BigCodeBench Leaderboard assesses LLMs with practical and challenging programming tasks, utilizing its v0.1.0 version for evaluation.
- They provide multiple resources including the GitHub repo, Leaderboard, and a relevant arXiv paper.
- GitDiagram turns GitHub repos into diagrams: GitDiagram allows users to convert any GitHub repository into an interactive visualization, enhancing understanding of project structure swiftly.
- Just replace 'hub' with 'diagram' in any GitHub URL to use this tool; it’s recommended to try with various repositories for demonstration.
- GitIngest simplifies codebase ingestion: GitIngest transforms any Git repository into a plain text representation, which is useful for feeding a codebase into any LLM.
- Much like GitDiagram, you can swap 'hub' with 'ingest' in any GitHub URL to access this functionality effectively.
- BigCodeBench Leaderboard: no description found
- GitDiagram - Repository to Diagram in Seconds: Turn any GitHub repository into an interactive diagram for visualization.
- Git ingest: Replace 'hub' with 'ingest' in any Github Url for a prompt-friendly text
Nous Research AI ▷ #general (324 messages🔥🔥):
DeepSeek V3 Release, Linux Mint Experience, Text-to-Video Model Comparisons, URL Moderation API Challenges, Inference Costing and Deployment
- DeepSeek V3 Launches with Impressive Specs: DeepSeek V3 was released, showcasing 685 billion parameters and is touted as one of the largest open-weight models, with varying VRAM requirements for deployment.
- Discussions highlighted the high resource demands for effective model inference, with recommendations of 320 GPUs like H100s for optimal performance.
- Exploring Linux Mint – A Game Changer: Users shared their excitement about transitioning to Linux Mint, with experiences on bare metal versus virtual machines being particularly noteworthy for resource efficiency.
- Installing Linux provided an engaging learning experience for many members who appreciated the OS's lightweight nature and command-line capabilities.
- Debate on Text-to-Video Models' Performance: There was a comparison of text-to-video models, particularly the Hunyuan and LTX models, pointing out their usability based on hardware specifications like VRAM.
- Users expressed interest in the latest T2V models and shared insights about ease of use and performance limitations, particularly regarding resource-intensive tasks.
- Challenges in Building a URL Moderation API: An AI engineer discussed difficulties with AI models for developing a URL moderation API that accurately categorizes unsafe sites across various dimensions without hallucinations.
- Attempts with different models yielded insufficient performance, particularly with OpenAI's models frequently denying assistance and Llama struggling with structured output.
- Understanding Inference Costs and Model Deployment: Members analyzed the cost structure of deploying AI models, debating over the efficiency of current hosting solutions and promotional pricing strategies.
- Despite skepticism about pricing sustainability, it was stated that sufficient regular load could offset operational costs effectively, making high-performance AI services more accessible.
- Music Bot - a Hugging Face Space by discord-community: no description found
- Hackerman GIF - Hacker Hackerman Kung Fury - Discover & Share GIFs: Click to view the GIF
- deepseek-ai/DeepSeek-V3-Base at main: no description found
- switch from `training_args.bin` `training_args.json` by not-lain · Pull Request #35010 · huggingface/transformers: What does this PR do?switch from training_args.bin to training_args.json and only capture the parameters that the user passedI'm using the same approach we are using in huggingface_hub...
Nous Research AI â–· #ask-about-llms (2 messages):
NotebookLM Inline Sourcing
- Inquiry on NotebookLM Inline Sourcing: A member initiated a discussion asking how NotebookLM inline sourcing functions.
- This indicates a growing interest in the specific mechanics behind sourcing within the NotebookLM environment.
- Quest for Knowledge on NotebookLM: The channel witnessed a call for clarification regarding NotebookLM functionalities.
- Members expressed curiosity over how inline sourcing operates, reflecting a desire for deeper understanding.
Nous Research AI â–· #research-papers (2 messages):
Differentiable Cache Augmentation, DeepSeek-V3
- Differentiable Cache Augmentation improves LLM thinking: Research shows that augmenting a frozen LLM with an offline coprocessor operating on its key-value (kv) cache enhances its ability to generate and attend to intermediate reasoning steps, thus reducing latency costs.
- Experiments revealed that when the cache is augmented, the decoder achieves lower perplexity and better performance across various reasoning-intensive tasks, even without task-specific training.
- Explore DeepSeek-V3 on GitHub: The project DeepSeek-V3 is actively developed and can be explored on GitHub, as detailed in the DeepSeek_V3.pdf.
- This PDF offers insights into the project's development and capabilities, enhancing community participation and contributions.
- Paper page - Deliberation in Latent Space via Differentiable Cache Augmentation: no description found
- DeepSeek-V3/DeepSeek_V3.pdf at main · deepseek-ai/DeepSeek-V3: Contribute to deepseek-ai/DeepSeek-V3 development by creating an account on GitHub.
Nous Research AI â–· #research-papers (2 messages):
Differentiable Cache Augmentation, DeepSeek V3
- Enhancing LLMs with Differentiable Cache: Research demonstrates a method for augmenting a frozen LLM with an offline coprocessor that operates on the model's key-value (kv) cache to reduce latency and improve performance on reasoning tasks.
- Augmenting the cache consistently reduces perplexity on various tasks, as the coprocessor can operate asynchronously and the language model remains functional if it's unavailable.
- DeepSeek V3 Now Available: The latest version of DeepSeek, known as DeepSeek-V3, has been made available on GitHub, offering significant advancements in development.
- You can find more details and contribute on their GitHub repository.
- Paper page - Deliberation in Latent Space via Differentiable Cache Augmentation: no description found
- DeepSeek-V3/DeepSeek_V3.pdf at main · deepseek-ai/DeepSeek-V3: Contribute to deepseek-ai/DeepSeek-V3 development by creating an account on GitHub.
OpenRouter (Alex Atallah) â–· #announcements (2 messages):
Web Search for LLMs, Price Cuts on Models, New Endpoints API, Deepseek v3 Launch
- Web Search for Any LLM Debuts: The Web Search feature has launched for any language model on OpenRouter Chatroom, making it easier to obtain up-to-date information. A live demo is available at this link.
- API access will be introduced later, and the feature is free for the time being.
- Price Cuts Across Models: Significant price reductions have been implemented for several models, including qwen-2.5 which saw a 12% decrease, and hermes-3-llama-3.1-70b with a 31% cut.
- The detailed pricing updates highlight a range of models now available at lower costs.
- New Endpoints API in Beta: A new Endpoints API is now in beta, letting users access model details and available endpoints during an undocumented preview; behavior could change before official documentation is released.
- An example of usage can be found at the API link.
- Deepseek v3 Sees Tripled Usage: Since the launch of Deepseek v3, its usage on OpenRouter has tripled with benchmarks showing competitive performance against Sonnet and GPT-4o at a lower price point. Interested users can try it without a subscription at this link.
- Notable comments emphasize that the model is viewed as a strong contender and that China has caught up in the AI space.
- Tweet from OpenRouter (@OpenRouterAI): Deepseek has tripled in usage on OpenRouter since the v3 launch yesterday. Try it yourself, w/o subscription, including web search: Quoting Anjney Midha 🇺🇸 (@AnjneyMidha) Deepseek v3 seems to be a gen...
- Tweet from OpenRouter (@OpenRouterAI): Holiday experiment: Web Search, but for any LLM! Here's Sonnet with & without grounding:
OpenRouter (Alex Atallah) â–· #app-showcase (2 messages):
3D Game Generation Tool, AI Chat Terminal (ACT)
- Generate 3D Games with Simple Words: A new tool allows users to create a 3D game by simply describing it in words, addressing previous limitations faced with models like GPT-3/4 and Claude.
- The tool's abilities have improved significantly with o1 and o1-preview, promising potential for full voxel engine support to render complex shapes.
- Transform Your Terminal with AI Chat Terminal: Introducing the AI Chat Terminal (ACT) that merges agent features and codebase chatting, streamlining interactions with AI models like OpenAI and Anthropic.
- Key features include an Agent Mode for executing tasks and a multi-provider support for switching between different models efficiently. Try it now!
- AI-Generated 3D Platform Game: no description found
- GitHub - Eplisium/ai-chat-terminal: Terminal Script for OpenAI and OpenRouter API models. Let's make this a functional performing script.: Terminal Script for OpenAI and OpenRouter API models. Let's make this a functional performing script. - Eplisium/ai-chat-terminal
OpenRouter (Alex Atallah) ▷ #general (301 messages🔥🔥):
DeepSeek V3 Feedback, OpenRouter Chat Performance, DeepSeek Pricing, API Limitations, Model Comparisons
- DeepSeek V3 Reviews are Mixed: Users are discussing the performance of DeepSeek V3, noting that it seems to perform comparably to previous versions with slight improvements for specific tasks like coding.
- One user shared a poem generated by the models, highlighting the creativity of each, though some felt the output quality varied.
- OpenRouter Chat UI Experiences: Feedback was given regarding the OpenRouter chat UI, with reports of lag and performance issues when handling an extensive chat history.
- Users expressed a desire for quicker responses as the current interface becomes unwieldy with larger data sets.
- Pricing and Access to Models: Discussion around pricing for models included concerns over the costs of O1 Pro and hopeful considerations for alternatives via OpenRouter.
- Users want to avoid high monthly fees, particularly as new models like O3 are rumored to carry substantial price tags.
- Batching Requests in APIs: Conversation regarding how batching requests works focused on scheduling multiple requests for processing when GPUs are idle.
- Users noted that batching is not supported directly via OpenRouter API and emphasized the importance of request prioritization.
- Token Limitations and Model Access: Users who hit access errors like 'no endpoints found matching your data policy' traced them to misconfigured account settings.
- The discussion highlighted the need for clear communication on API settings to improve user experience.
- Limits | OpenRouter: Set limits on model usage
- Tweet from Ruben Kostandyan (@ruben_kostard): @paulgauthier Yes, I see it on the API as well: https://x.com/ruben_kostard/status/1871939691794350161Quoting Ruben Kostandyan (@ruben_kostard) You can verify the @deepseek_ai model being V3 on the A...
- DeepSeek | OpenRouter: Browse models from DeepSeek
- good luck have fun: no description found
- High Bandwidth Memory - Wikipedia: no description found
- Deepseek V3 - API, Providers, Stats: DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction following and coding abilities of the previous versions. Pre-trained on nearly 15 trillion tokens, the reported ev...
- Llama 3.3 | Model Cards and Prompt formats: .
- deepseek-ai/DeepSeek-V3 · Hugging Face: no description found
- Training Large Language Models to Reason in a Continuous Latent Space: no description found
- every-chatgpt-gui/README.md at main · billmei/every-chatgpt-gui: Every front-end GUI client for ChatGPT, Claude, and other LLMs - billmei/every-chatgpt-gui
- Dualformer: Controllable Fast and Slow Thinking by Learning with Randomized Reasoning Traces: no description found
LM Studio ▷ #general (153 messages🔥🔥):
LM Studio Model Performance, AI Roleplaying Game Management, Memory Management Issues, Implementation of RAG for PAM, Model Context Length Limitations
- LM Studio Model Performance and Issues: Users have reported issues with loading models like QVQ 72B in LM Studio, with specific mention that while MLX works fine, GGUF has bugs.
- Different builds also have varying performance, with an old bug causing errors in earlier versions now addressed in build 0.3.5.
- Managing AI in Roleplaying Games: Discussion around using AI models like Qwentile to manage tabletop RPG experiences has raised interest in how well these models can retain coherency over extended narratives.
- Models like Mistral and Qwen have been identified as capable for this purpose, with suggestions to refine approach through fine-tuning or chunking data.
- Memory Management Issues in Models: Concerns over memory leaks in MLX models prompted users to discuss their experiences with session management and RAM usage.
- Reported memory leaks have been acknowledged, with known issues being actively investigated by the development team.
- Implementing RAG in AI Systems: The use of Retrieval-Augmented Generation (RAG) was discussed as a method to enhance model memory and experience in managing RPG scenarios.
- Users were encouraged to seek out ready-made solutions for implementing RAG, as building from scratch requires significant coding skills.
- Context Length Limitations of Models: The limitations of context length in AI models was a key point of discussion, particularly how larger models like Llama 3.3 can manage 128k tokens.
- Strategies to retain pertinent setting information without overloading context were proposed, acknowledging that memory management remains a critical challenge.
- mradermacher/Qwentile2.5-32B-Instruct-GGUF · Hugging Face: no description found
- Crafting Effective ChatGPT Prompts for Tabletop Roleplaying Games: A Step-by-Step Guide (Part 1): Welcome to the first part of our series exploring the innovative intersection of tabletop RPGs and AI through the lens of ChatGPT.
- Roleplaying Systems — Oracle RPG: no description found
- 0.3.5b9 Memory leak with MLX models · Issue #63 · lmstudio-ai/mlx-engine: Using an mlx conversion of a L3.3 b70 model in 8bit, each request seems to cause an huge memory leak. I've 33k context and each request uses around 10G of memory, which is roughly what the KVCache...
- LM Studio Beta Releases: LM Studio Beta Releases
- Oracle RPG: Guides & resources for playing roleplaying games such as Dungeons and Dragons solo.
- lms — LM Studio's CLI - CLI | LM Studio Docs: Get starting with the lms command line utility.
- Download LM Studio - Mac, Linux, Windows: Discover, download, and run local LLMs
- Chat with Documents - Running LLMs Locally | LM Studio Docs: How to provide local documents to an LLM as additional context
- OpenAI Compatibility API - API | LM Studio Docs: Send requests to Chat Completions (text and images), Completions, and Embeddings endpoints
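Since LM Studio's local server speaks the OpenAI wire format (as the docs linked above describe), the stock OpenAI client works against it; a minimal sketch using the documented default port, with the model name as a placeholder:

```python
# Point the OpenAI client at LM Studio's local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves the currently loaded model
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```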
LM Studio ▷ #hardware-discussion (101 messages🔥🔥):
X99 motherboard performance, GPU utilization for LLMs, Hardware recommendations for AI training, Multi-GPU setups, Low VRAM models for video generation
- X99 Motherboard Systems Prove Capable: Users discussed their older desktop systems running on X99 motherboards with Xeon E5 v4 CPUs, noting they handle models well despite their age.
- One user observed that their dual RTX 2060 setup can handle large models effectively, highlighting that upgrades may not be necessary for casual use.
- Optimizing GPU Utilization with LLM Studio: Concerns were raised about poor GPU utilization while running LLM Studio on a multi-GPU setup, with GPUs only reaching around 30% usage.
- Experts advised that increasing VRAM capacity doesn't necessarily improve inference speed due to memory latency, suggesting better performance with NVLink.
- Discussion on AI Hardware Recommendations: Users exchanged insights on motherboard and CPU combinations for building systems capable of training large AI models, emphasizing cost-efficiency and component compatibility.
- One user highlighted a Genoa server motherboard with dual CPUs as a viable option, while others shared experiences with various GPU setups.
- Challenges with Multi-GPU Configurations: Participants discussed the limitations of using multiple GPUs in builds, particularly regarding PCIe lane configurations and bandwidth limitations.
- It was noted that having more GPUs increases VRAM but does not necessarily translate to faster individual inference speeds.
- Exploring LoRAs for Video Models: Users speculated on future developments in video generation models, particularly concerning LoRAs trained with minimal data.
- One user expressed skepticism about training capabilities with just a few still images, while others discussed the implications for video quality.
- Thats The Neat Part You Dont Invincible GIF - Thats The Neat Part You Dont Invincible - Discover & Share GIFs: Click to view the GIF
- Asrock WRX90 WS EVO Motherboard - Opened Box Tested to BIOS | eBay: no description found
Stability.ai (Stable Diffusion) ▷ #general-chat (226 messages🔥🔥):
AI Image Generation Techniques, ComfyUI Usage Tips, Stable Diffusion Model Comparisons, Video Generation Capabilities, NSFW Protections in LoRA
- Exploring AI Image Generation Controls: Users discussed the intricacies of AI image generation processes, noting that more detailed prompts could yield better results depending on the model used.
- Participants emphasized that learning to prompt effectively is vital for achieving control over the generated images.
- ComfyUI for Generative Art: Many users expressed confusion about ComfyUI's complexities but found success with using symlinked models and the Stability Matrix for easier management.
- Members recommended tools like SwarmUI for a more accessible interface while also sharing their personal experiences with various AIs.
- Comparing Video Generation Models: Discussion revolved around the capabilities of img2video models in ComfyUI, with comparisons to models like Veo 2 and Flux for their effectiveness and resource demands.
- It was noted that LTX Video is suitable for users with 8GB of VRAM, while the community explores new video generation techniques.
- Efficiency of Superresolution Models: Questions arose about appropriate image sizes for training superresolution models, specifically regarding the feasibility of generating images at 1700 px.
- Users recognized that while high resolutions are desirable, they often demand significant resources that may not be feasible for all setups.
- Navigation of NSFW Features in LoRA Models: There was a humorous exchange about how to handle NSFW protections in LoRA, suggesting a need for transparency while navigating these settings.
- Community members shared their thoughts on the limitations and existing functionalities regarding NSFW options.
- GitHub - KONAKONA666/LTX-Video: LTXVideo Q8: LTXVideo Q8. Contribute to KONAKONA666/LTX-Video development by creating an account on GitHub.
- Reddit - Dive into anything: no description found
OpenAI ▷ #ai-discussions (183 messages🔥🔥):
OpenAI Outage and Alternatives, DeepSeek V3 Performance, Model Comparisons, ChatGPT Competitors, Acronym Understanding in LLMs
- OpenAI Outage Sparks Interest in Alternatives: Users reported issues with ChatGPT being down, prompting discussions about alternative AI models like DeepSeek, Google AI Studio, and Claude.
- Many expressed relief that the impact of the outage seems less significant compared to previous incidents, leading to exploration of other options.
- DeepSeek V3 Impressive Performance: DeepSeek V3 is noted for its performance, boasting a 64k context limit and faster response times compared to established models like GPT-4 and Sonnet 3.5.
- Users highlighted its ability to provide coherent coding support without minor errors, making it a strong candidate for development projects.
- Mixed Reactions to DeepSeek and File Support: While DeepSeek is praised for its efficiency, some users pointed out limitations such as its lack of direct file support, particularly compared to models that handle various formats.
- Still, the prospect of using OCR for document processing adds a layer of versatility, appealing to those seeking all-in-one solutions.
- Exploring Other Models Beyond ChatGPT: Users are considering shifting focus from GPT-4o to alternatives like DeepSeek V3 and Gemini AI for their projects, especially during ChatGPT issues.
- The community highlighted the importance of trying out new models to find the right fit for different programming tasks and needs.
- Challenges with Acronym Recognition in LLMs: There were discussions about how to effectively
Link mentioned: High error rates for ChatGPT, APIs, and Sora: no description found
OpenAI ▷ #gpt-4-discussions (33 messages🔥):
GPT-O3 Release, ChatGPT Down Status, Using GPTs in RPGs, Canvas Window Issues, Eslint Configuration
- GPT-O3 Release is Approaching: A member mentioned that O3-mini is due for release in late January, with O3 expected shortly after that.
- Another member expressed curiosity about the model's capabilities, highlighting that current information is scarce.
- ChatGPT Experiences Downtime: Multiple users reported issues accessing ChatGPT, with one specifically noting errors across different browsers and the mobile app.
- Responses varied from sarcasm about subscription choices to general assertions that indeed, ChatGPT was down.
- Funny NPC Concept Using GPT: A user proposed creating a talking NPC in an RPG that humorously mimics GPTs and their limitations, stating that it has no understanding of what it is actually saying.
- Another member concurred, drawing parallels to a comedic translating machine from Futurama.
- Canvas Window Malfunction: One user reported a broken canvas window that opens briefly before closing, seeking confirmation from others.
- No solution was shared, but it seems like a common frustration among users in the discussion.
- Eslint Configuration with O1 Pro: A member inquired about configuring settings with o1 pro, suggesting that if the eslint configuration is structured, it might be possible to pass relevant settings directly.
- This reflects ongoing interest in integrating better app functionalities with existing tools in their development environment.
OpenAI â–· #prompt-engineering (1 messages):
madame_architect: Why would a minute not be ok?
OpenAI â–· #api-discussions (1 messages):
madame_architect: Why would a minute not be ok?
Stackblitz (Bolt.new) â–· #prompting (7 messages):
ProductPAPI, Anthropic Concerns, Direct Code Modification, Claude Load Issues
- Gabe's ProductPAPI Under Construction: A member revealed that ProductPAPI is an app Gabe is developing to simplify tasks for users, but details remain scarce.
- No further information was provided on its functionality or launch timeline.
- Scalability Issues Affecting Providers: Concerns were raised about a drop in quality on Bolt when users switch to Anthropic's concise mode, impacting all major providers.
- This drop in quality indicates massive scalability issues faced by the platforms.
- Need for Direct Code Changes in Prompts: A member expressed frustration over receiving code instead of direct changes, seeking suggestions to improve their prompts.
- Another member recommended including a clear request in the prompt, stating 'please make the changes to my code directly.'
- Link Between Claude and Bolt Performance: Members discussed the correlation between the load on Claude and the performance of Bolt.
- One member speculated that encountering a demand warning on Claude might also indicate poorer performance on Bolt.
Stackblitz (Bolt.new) ▷ #discussions (198 messages🔥🔥):
Issues with Bolt token usage, Building applications using Bolt, Feature requests and feedback for Bolt, Community support and collaboration, Tool limitations and user experiences
- Frequent Token Usage Issues: Multiple users reported significant token consumption for basic operations in Bolt, with some feeling frustrated by the ignoring of prompts and unwarranted changes to code.
- One user expressed they spent over 1.5 million tokens with minimal success, leading to a reevaluation of their approach and frustration with the AI's performance.
- Rideshare App Development Confirmation: A new user inquired about the feasibility of building a rideshare app across the United States using Bolt, confirming their existing web-based portal for airport rides.
- Community members assured that since the platform is web-based, developing the desired application should be manageable.
- Feedback and Suggestions for User Experience: Users seek ways to provide feedback and suggestions for improving Bolt's user experience, with recommendations to submit feature requests via GitHub.
- A community member emphasized the importance of clear communication channels to help the development team prioritize user requests.
- Community Support for Debugging Issues: Users discussed various debugging issues and strategies, including the challenges faced when code changes didn't align with user input, leading to deeper investigation.
- Community suggestions included checking error messages, using GitHub for bug reporting, and consulting support resources for assistance.
- Challenges with Upgrading and Token Limits: Concerns were raised about the cost of subscription tiers and token limits for users transitioning from paid plans, with discussions on alternatives for continued use.
- Users were reminded that free coding is available in StackBlitz once token limits are reached, providing a temporary solution until funds allow for subscription renewal.
- Harmony - Where Finance Meets Mindfulness: no description found
- Notion – The all-in-one workspace for your notes, tasks, wikis, and databases.: A new tool that blends your everyday work apps into one. It's the all-in-one workspace for you and your team
- Bolters.io | Community Supported Tips, Tricks & Knowledgebase for Bolt.new No-Code App Builder: Documentation and guides for Bolt.new
- READ THIS FIRST: Critical information about Bolt.new's capabilities, limitations, and best practices for success
- Issues · stackblitz/bolt.new: Prompt, run, edit, and deploy full-stack web applications - Issues · stackblitz/bolt.new
Unsloth AI (Daniel Han) ▷ #general (132 messages🔥🔥):
QVQ Model Launch, Fine-Tuning Llama Models, DeepSeek V3 Discussion, Nvidia Driver Issues, Dataset Formatting for AI Training
- QVQ Model Just Released: The QVQ-72B 4-bit and 16-bit versions have been uploaded, showcasing significant capabilities in visual reasoning, positioned as a competitive alternative to existing models.
- The model's performance highlights include achieving 70.3% on the MMMU benchmark, indicating its potential in multidisciplinary tasks.
- Challenges in Fine-Tuning Llama Models: When fine-tuning Llama 3.2 3B, users reported difficulties with their trained model's performance, indicating that input data quality and formatting can greatly impact outcomes.
- Experts suggest that training success hinges more on quality of data rather than quantity, advising iterative adjustments based on trial and error.
- DeepSeek V3's MoE Architecture: DeepSeek V3 has sparked conversations regarding its size and capabilities, particularly in its use of Mixture of Experts (MoE) architecture, leading to speculation about OpenAI and Anthropic's models.
- The scaling efficiencies and computational savings it offers, notably being 50x cheaper than Sonnet, position it as a significant player in the AI landscape.
- Nvidia Driver Version Concerns: Discussions uncovered that running older Nvidia drivers, such as 535.161.07, could lead to compatibility issues when utilizing models in cloud environments.
- Users are encouraged to operate in Linux environments, especially using WSL for compatibility and better performance with libraries like Triton.
- Formatting Datasets for AI Training: For those fine-tuning models, the format of the training dataset—particularly whether it's a conversational or Q&A style—has been identified as critical to model effectiveness.
- One example presented included structured data with roles for system, user, and assistant, emphasizing the importance of adhering to proper formatting standards.
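To illustrate that last point, here is a minimal sketch of one conversational training record in JSONL form; the messages/role/content field names follow the common chat-format convention and are an illustrative assumption, not a specific Unsloth requirement.

```python
# A minimal sketch of a conversational training record; field names follow
# the common "messages"/"role"/"content" convention and are illustrative.
import json

example = {
    "messages": [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a function that reverses a string."},
        {"role": "assistant", "content": "def reverse(s):\n    return s[::-1]"},
    ]
}

# One JSON object per line in the training file.
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example, ensure_ascii=False) + "\n")
```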
- No License: You’re under no obligation to choose a license and it’s your right not to include one with your code or project. But please note that opting out of open source licenses doesn’t mean you’re opting out ...
- Qwen QVQ + QwQ Collection - a unsloth Collection: no description found
- Qwen/QVQ-72B-Preview · Hugging Face: no description found
- deepseek-ai/DeepSeek-V3-Base · Hugging Face: no description found
- YouTube: no description found
- Unsloth Documentation: no description found
- All Our Models | Unsloth Documentation: See the list below for all our GGUF, 16-bit and 4-bit bnb uploaded models
Unsloth AI (Daniel Han) ▷ #off-topic (4 messages):
Sprint mode query, Coding datasets for instruction-tuning, Personal training experience for thesis
- Inquiry about Sprint Mode Timing: A member eagerly asked, 'when' the 'sprint mode' would be available, expressing curiosity with a clear sense of urgency.
- Attached was an image, but no specific details were provided about its content.
- Seeking Coding Datasets for LLMs: A member requested recommendations for 'coding datasets' specifically suited for instruction-tuning large language models, outlining a preferred format including problem description and generated solutions.
- They expressed a preference for datasets that focus on Python solutions.
- Personal Training Experience Shared: Another member shared insights about their recent personal experience, noting they had trained a model on their own data for their Bachelor's thesis.
- However, they downplayed the significance, stating, 'I wouldn't call it an experience.'
Unsloth AI (Daniel Han) ▷ #help (37 messages🔥):
SFT DPO Evaluation, Fine-tuning Llama 3.2 Vision, Using Unsloth with CPU, GGUF Conversion Issues, Model Performance Discrepancies
- SFT DPO Evaluation during Training: A member asked if it's possible to perform evaluation during training and compute metrics similar to the Hugging Face Transformers library using Unsloth.
- They inquired about the availability of examples in the documentation to support this functionality.
- Issues Fine-tuning Llama 3.2 with Text-only Dataset: A member reported issues with fine-tuning Llama 3.2 using a text-only JSONL dataset, claiming that Unsloth raises errors expecting image data.
- Discussion highlighted the need to disable vision layers, but the member sought further clarity on why errors persisted despite this setting.
- Running Trained Models on CPU: A member expressed difficulty in loading a GPU-trained model on a CPU-only local machine, seeking documentation on configuring Unsloth for CPU usage.
- Another member suggested quantizing the model into GGUF format, pointing out available guides in the Unsloth documentation; a hedged sketch follows after this list.
- Challenges with GGUF Conversion: A member mentioned deteriorating model performance when converting a fine-tuned Llama 3.2 model to GGUF format using llama.cpp.
- It was suggested to ensure consistency in configuration and prompt format to avoid issues during conversion.
- Weird Responses from Local Model: A member reported receiving nonsensical responses from a tuned Mistral model when running it locally, compared to running on Colab.
- They sought assistance in diagnosing the cause of the strange output and how to improve response quality.
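A hedged sketch of that GGUF route for CPU inference: the save_pretrained_gguf call and the q4_k_m method name reflect Unsloth's commonly documented API, but verify against the current docs; the checkpoint path is hypothetical.

```python
# Hedged sketch: quantizing a fine-tuned model to GGUF for CPU inference
# via Unsloth. API names reflect the docs as of late 2024; double-check.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="path/to/finetuned-model",  # hypothetical local checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Writes a llama.cpp-compatible GGUF file that can then run on CPU.
model.save_pretrained_gguf("gguf-out", tokenizer, quantization_method="q4_k_m")
```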
- unsloth-zoo/unsloth_zoo/peft_utils.py at main · unslothai/unsloth-zoo: Utils for Unsloth. Contribute to unslothai/unsloth-zoo development by creating an account on GitHub.
- Bug Fixes (#1470) · unslothai/unsloth@a240783: * Update llama.py * Update _utils.py * Update llama.py * Update llama.py * Update _utils.py * Update pyproject.toml * Update _utils.py * Update llama.py * CE Loss * Updat...
- All Our Models | Unsloth Documentation: See the list below for all our GGUF, 16-bit and 4-bit bnb uploaded models
Unsloth AI (Daniel Han) ▷ #research (5 messages):
Stella recommendations, Mixed Bread models, Benchmarking and finetuning
- Inquiry about Stella not being recommended: A member expressed curiosity about why Stella was not recommended, prompting a discussion.
- Mrdragonfox admitted to having no exposure to Stella, indicating it might not have been well-established in the community.
- Mixed Bread models demonstrate capability: Another member highlighted that they use mixed bread models daily, affirming they are very capable depending on the application.
- They emphasized that the effectiveness of such models ultimately depends on the specific vertical they are applied to.
- Need for benchmarking and finetuning: A member pointed out the importance of benchmarking models against one's data to ensure performance.
- They also noted that in the end, finetuning may be necessary, as indicated by download numbers reflecting interest.
Perplexity AI ▷ #general (134 messages🔥🔥):
Perplexity AI usage concerns, Feedback on AI models, Job inquiries in AI, AI model comparisons, Subscription and access issues
- Challenges with model selection in Perplexity AI: Users reported issues selecting certain models, with one stating that selecting Sonar Huge defaults to Claude 3.5 Sonnet, highlighting frustrations with current selection limitations.
- Another user expressed concerns over not being able to select any model other than Claude 3.5 Sonnet and GPT-4o, indicating a potential bug.
- Perplexity AI model effectiveness and competition: Users debated the effectiveness of various AI models, with some noting that Gemini's Deep Research Mode is very competitive compared to others, particularly in terms of functionality and context handling.
- Discussion centered around the potential addition of newer models, with ongoing speculation about o1 being integrated into the service.
- Job inquiries and community engagement: A member expressed interest in job opportunities within the AI space, emphasizing their experience with projects involving NLP and generative AI.
- Community interaction occurred with humorous responses regarding the capabilities and expectations from potential candidates.
- Concerns over subscription access and usability: Several users faced challenges accessing Pro Channels, with one inquiring about special tricks to gain access, hinting at a lack of clarity in the access protocols.
- Users discussed their experiences with payment issues related to free subscriptions, seeking solutions.
- Feedback on AI tools for coding: A user sought recommendations for coding AIs, detailing their dissatisfaction with several tools and requesting high-quality, fast-output alternatives.
- Other community members provided suggestions while sharing personal experiences with different AI coding platforms.
- Here Money GIF - Here Money Owe - Discover & Share GIFs: Click to view the GIF
- DeepSeek-V3 has quietly gone live on the web and API: Tested it, and it's really strong; hoping they don't raise the price 😊
Perplexity AI ▷ #sharing (14 messages🔥):
NASA touches the Sun, Murder Hornets eradicated, Solar paint for EV charging, AI model from India, Body-heat powered wearables
- NASA Makes Historic Contact with the Sun: NASA's recent mission has successfully touched the Sun, gathering groundbreaking data about its outer atmosphere. This achievement could redefine our understanding of solar physics and its impact on Earth.
- Experts stress the importance of this mission as a key step in solar research.
- Murder Hornets Successfully Eliminated: Recent efforts have led to the eradication of Murder Hornets in various regions, with authorities confirming no new sightings. This initiative aimed to protect local ecosystems and bee populations.
- Experts label this success as 'critical for maintaining biodiversity' amid ongoing environmental challenges.
- Innovative Solar Paint Charges EVs: A new solar paint has been developed that can generate energy to charge electric vehicles, promising a cleaner future for transportation. Testing shows efficiency rates that are revolutionary for renewable energy technology.
- This innovation has been dubbed a breakthrough by researchers, sparking interest in sustainable urban development.
- Groundbreaking Indian AI Model Inspired by Yann LeCun: A new AI model from India claims to embody the vision of renowned researcher Yann LeCun, promising advancements in AI functionality and ethics. The model aims to enhance human-like reasoning and learning capabilities.
- Many in the AI community are optimistic about this development, citing its potential to transform model training processes.
- Wearables Powered by Body Heat Breakthrough: New body-heat powered wearable technology has emerged, allowing devices to operate without batteries. This advancement not only enhances convenience but also aims to promote a greener approach to personal electronics.
- Experts believe this could be a game-changer in the wearable market, pushing the boundaries of energy efficiency.
Link mentioned: YouTube: no description found
Perplexity AI ▷ #pplx-api (4 messages):
Payment Processing, Virtual Cards, OpenRouter Credits, Perplexity Models
- Businesses Use Secure Payment Processing: Members discussed that payment processing is essential and that currently, there's no other option available.
- One member suggested using a virtual card, noting that most banks provide that feature for security.
- OpenRouter Offers Perplexity Models: A member expressed surprise after purchasing credits that OpenRouter offers access to Perplexity's models.
- Despite this discovery, they decided to stick with their chosen provider for now.
Latent Space ▷ #ai-general-chat (103 messages🔥🔥):
DeepSeek V3, OpenAI outages, ChatGPT memory improvements, RL training for LLM reasoning, Anduril partnership with OpenAI
- DeepSeek V3 Sets New Standards: DeepSeek V3 has been introduced with 685B parameters, showing remarkable performance with enhanced efficiency, using only 2.788M H800 hours for training.
- It demonstrates 60 tokens/second speed and impressive results on various benchmarks, raising the bar for cost-effective model training.
- OpenAI Experiences Major Outage: A significant outage has affected OpenAI, marking one of the worst monthly uptimes since January 2023, creating waves of disruption in API accessibility.
- Users have expressed concerns about API reliability, with implications for ongoing projects heavily relying on OpenAI's services.
- ChatGPT's Infinite Memory Feature: ChatGPT is rumored to roll out an infinite memory feature that allows access to past chats, aiming to enhance user interactions dramatically.
- This new capability is expected to unlock more seamless and natural conversations, boosting the utility of the AI.
- RL Training for Improved LLM Reasoning: A YouTube video shared highlights effective reinforcement learning techniques targeting LLM reasoning improvements, providing insightful training methodologies.
- The focus on reasoning indicates an evolution in LLM training, emphasizing practical applications in generating coherent and logical outputs.
- Anduril and OpenAI Collaboration Announcement: Anduril announced a partnership with OpenAI to integrate AI solutions for military applications, emphasizing enhanced decision-making and national security.
- This collaboration underlines a commitment to advancing responsible AI technologies in defense, amidst discussions on ethical considerations.
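For a sense of scale, the headline numbers above check out against each other. A quick back-of-the-envelope sketch, assuming the standard 6ND pretraining FLOP rule and the roughly $2/GPU-hour rate the reported figures imply (both assumptions, not numbers from the thread itself):

```python
# Sanity-checking the reported figures: 2.788M H800-hours at ~$5.5M total
# implies roughly $2/GPU-hour; the 6*N*D rule gives the pretraining FLOPs.
gpu_hours = 2.788e6                      # H800-hours reported for V3
print(f"implied rate: ${5.5e6 / gpu_hours:.2f}/GPU-hour")   # ~ $1.97

active_params, tokens = 37e9, 14.8e12    # 37B active params, 14.8T tokens
flops = 6 * active_params * tokens       # standard 6*N*D pretraining estimate
print(f"pretraining compute: {flops:.2e} FLOP")             # ~ 3.3e24
```

The ~3e24 FLOP result matches the "10x less compute than Llama 405B" framing cited in the links below.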
- Tweet from DeepSeek (@deepseek_ai): 🚀 Introducing DeepSeek-V3! Biggest leap forward yet: ⚡ 60 tokens/second (3x faster than V2!) 💪 Enhanced capabilities 🛠 API compatibility intact · Fully open-source models & papers · 1/n
- Building in the Era of Autonomous Software Development: no description found
- Tweet from Teortaxes▶️ (@teortaxesTex): Ok folks, I've slept through a yet another update. Whale-3 is here, and it's better than ever. (from what I hear, VL2 API/web comes soon as well; V3.0 will not have image support) Quoting You Ji...
- Tweet from near (@nearcyan): omg someone deepfaked it for me, it's perfect. Quoting near (@nearcyan): in a world of AI agents, the age of the Engineer is over. The time of the Idea Guy has come.
- Tweet from Teortaxes▶️ (@teortaxesTex): And here… we… go. So, that line in config. Yes it's about multi-token prediction. Just as a better training obj – though they leave the possibility of speculative decoding open. Also, "muh 50K H...
- Tweet from Teortaxes▶️ (@teortaxesTex): 4chan gooners made me think: V3 may be the first time pruning (usually a meme) will work. The pitch of their finegrained MoE is «ultimate expert specialization», and @wzihanw developed a method for ...
- Tweet from wh (@nrehiew_): How to train a 670B parameter model. Let's talk about the DeepSeek v3 report + some comparisons with what Meta did with Llama 405B
- Tweet from Teortaxes▶️ (@teortaxesTex): > $5.5M for Sonnet tier. It's unsurprising that they're proud of it, but it sure feels like they're rubbing it in. «$100M runs, huh? 30.84M H100-hours on 405B, yeah? Half-witted Western h...
- Tweet from Lisan al Gaib (@scaling01): META could have trained DeepSeek-V3 at least 15 times using the compute budget of the Llama 3 model family (39.3 million H100 hours). Meanwhile DeepSeek only spent 2.6 million H800 hours (a handicappe...
- Tweet from Paul Gauthier (@paulgauthier): The "preview" of DeepSeek's new V3 model takes 2nd place on the aider polyglot leaderboard: 62% o1, 48% DeepSeek V3 Preview, 45% Sonnet, 38% Gemini-exp-1206, 33% o1-mini. https://aider.chat/docs/lead...
- Tweet from main (@main_horse): just so many casual drops in the paper: "oh, by the way: we don't need TP, because our SOTA pipelining scheme permits perfect compute-comm overlap with EP, by manually managing SM allocation && ...
- Tweet from wh (@nrehiew_): They have 2 types of RL rewards. Verifiers (code, math) and standard model based RM. Importantly the model based RM is trained COT style GRPO from deepseek math used here
- Tweet from Jon Kurtis âš¡ (@jonkkillian): @CSMikeCardona AI code is gonna be the best low code rebrand since mayo+ garlic became Aioli
- Tweet from Xeophon (@TheXeophon): 600B params, second place in aider’s new polyglot bench, topping Sonnet: 61.7% o1, 48.9% V3, 45.3% Sonnet
- Tweet from Dean W. Ball (@deanwball): The Chinese AGI lab DeepSeek is reporting an insanely low training cost of only $5.5 million for their new v3 model, which seems to match Claude 3.5 sonnet performance. DeepSeek also has a credible “o1...
- Tweet from Alex Cheema - e/acc (@alexocheema): I will run Deepseek-V3-Base 685B on M4 Mac Minis or die trying. 685B MoE with 256 experts -- perfect for Apple Silicon since they have a lot of GPU memory and only a small subset of params are active a...
- Tweet from xjdr (@_xjdr): i am personally embarrassed by having done training runs with more compute and obviously significantly less performance. this is the new bar for me for flop efficiency and i absolutely love it. Quoting ...
- Tweet from Bret Taylor (@btaylor): The role of a software engineer is transforming from being the author of computer code to being the operator of a code generating machine. What is a computer programming system built natively for that...
- Tweet from Jaime Sevilla (@Jsevillamol): DeepSeek-V3 is impressively efficient. It has 37B active params and was pre-trained on 14.8T tokens, amounting to 6 x 37B x 14.8T = 3e24 FLOP. That's 10x less compute than llama 3.1 430B, yet bette...
- deepseek-ai/DeepSeek-V3 · Hugging Face: no description found
- Tweet from Mark Kretschmann (@mark_k): The rumored ♾ (infinite) Memory for ChatGPT is real. It seems like the roll-out is imminent, after the holidays. The new feature will allow ChatGPT to access all of your past chats, unlocking much more...
- Tweet from Kevin Kwok (@kevinakwok): Everyone talking about @btaylor's great post on autonomous software development. But his most prescient call was for wikipedia for data in 2008. Imagine how much more would be in distribution of our...
- deepseek-ai/DeepSeek-V3-Base · Hugging Face: no description found
- YouTube: no description found
- Tweet from Terry Yue Zhuo (@terryyuezhuo): Big congrats to @deepseek_ai! The V3 Chat model now ranks 1st on BigCodeBench-Hard. Complete: 40.5%, Instruct: 28.4%, Average: 34.5%. Gemini-Exp-1206 Average: 34.1%; o1-2024-12-17 (reasoning=medium) Ave...
- Tweet from Anduril Industries (@anduriltech): We’re joining forces with @OpenAI to advance AI solutions for national security. America needs to win. OpenAI’s models combined with Anduril’s defense systems will protect U.S. and allied military perso...
- Tweet from Vaibhav (VB) Srivastav (@reach_vb): Dug into the config files a bit, key differences (according to the config files) v2 vs v3: vocab_size: v2: 102400, v3: 129280; hidden_size: v2: 4096, v3: 7168; intermediate_size: v2: 11008, v3: 18432; num_hidden_lay...
- Tweet from Andrej Karpathy (@karpathy): DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget (2048 GPUs for 2 months, $6M). For reference, this level of capabil...
- Tweet from John Rush (@johnrushx): Absolutely nobody predicted this: AI Code is the new NoCode. Honestly, I like talking to AI more than to human developers when building small apps. It understands me better, even with half-baked specs....
- DeepSeek-V3/DeepSeek_V3.pdf at main · deepseek-ai/DeepSeek-V3: Contribute to deepseek-ai/DeepSeek-V3 development by creating an account on GitHub.
Latent Space ▷ #ai-announcements (3 messages):
2024 in Synthetic Data, 2024 in Agents, AI Engineer Summit NYC, Event Calendar Updates
- Loubna Recaps Synthetic Data & Smol Models: In the latest episode, Loubna Ben Allal shares insights on the top papers of 2024 concerning Synthetic Data and Smol Models.
- Key timestamps highlight topics like the rise of synthetic data and model collapse, paving the way for exciting discussions.
- Graham Neubig's Ambitious Agent Talk: In our final keynote, Graham Neubig explores the landscape of agents in 2024, showcasing powerful insights on their design and effective use.
- He provides a live demo and shares opinionated slides on the future of agents, addressing challenges in human-agent interaction.
- Save the Date for AI Events 2025: Mark your calendars for the AI Engineer Summit in NYC and other events slated for the first half of 2025.
- Stay updated by visiting Latent.Space events and subscribe to the calendar for new event notifications.
- Tweet from Latent.Space (@latentspacepod): ## Recapping 2024 in Synthetic Data/Smol Models! We are very honored to have @LoubnaBenAllal1 recap and pick all the best papers in Synthetic Data and Smol Models this year! Timestamps: [00:00:05] Loubna In...
- Tweet from Latent.Space (@latentspacepod): Our last keynote: Recapping 2024 in Agents. We save the most ambitious talk for last - we asked @gneubig, creator of the reigning #1 agent on SWE-Bench Full, to recap everything relevant to building agen...
- Latent Space (Paper Club & Other Events) · Events Calendar: View and subscribe to events from Latent Space (Paper Club & Other Events) on Luma. Latent.Space events. PLEASE CLICK THE RSS LOGO JUST ABOVE THE CALENDAR ON THE RIGHT TO ADD TO YOUR CAL. "Ad...
Interconnects (Nathan Lambert) ▷ #news (45 messages🔥):
DeepSeek V3 Launch, Multi-Token Prediction, Reward Model Techniques, Performance Comparisons, Model Training Techniques
- DeepSeek V3 Launch heralds major advancements: DeepSeek V3 has been introduced, boasting 60 tokens/second, which is 3x faster than its predecessor V2, with existing API compatibility and enhanced capabilities.
- The model is fully open-source and has been trained on 14.8 trillion tokens, showcasing significant engineering prowess under resource constraints.
- Innovative Multi-Token Prediction method revealed: DeepSeek utilized a novel Multi-Token Prediction (MTP) technique, which concatenates final representations with initial embeddings while maintaining a causal chain throughout predictions.
- This approach diverges from previous methods, potentially enhancing the effectiveness of the model without the complexities of multiple independent decoding heads; a rough sketch follows after this list.
- New Reward Model Techniques for Training: DeepSeek employs two types of RL rewards: verifiers for code/mathematics and a model-based RM method trained in a chain of thought style, which aids in refining outputs.
- The implementation is reported to enhance model performance strategically, but questions remain regarding the critique mechanism for non-definitive outputs like creative writing.
- Speculations and Comparisons with Existing Models: Several members expressed excitement about DeepSeek V3’s performance, noting it outperforms many open-source models and is comparable to models like GPT-4o and Claude-Sonnet-3.5.
- Discussion around the cost-effective scaling of resources used for training highlighted the efficient use of GPU hours compared to earlier iterations.
- Critique and Revision Debate: Members speculated on the effectiveness of using critique versus generating multiple outputs to select the best response, pondering the implications of computational efficiency.
- Questions arose regarding the integration of exogenous information in prompts, suggesting that clarity on the role of self-critique could lead to more effective model training results.
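To make the MTP description above concrete, here is a loose PyTorch sketch of one MTP depth: the head consumes the trunk's final hidden states concatenated with the embeddings of the next input tokens, so the look-ahead prediction keeps a causal chain. Module names, sizes, and the single-block head are illustrative assumptions, not DeepSeek's actual code.

```python
# Loose sketch of one multi-token-prediction depth: the head pairs the
# trunk's hidden state at position t (which predicts token t+1) with the
# embedding of token t+1, letting it predict token t+2 causally.
import torch
import torch.nn as nn

class MTPHead(nn.Module):
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(2 * d_model, d_model)   # merge hidden + embedding
        self.block = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, hidden, next_tok_emb):
        h = self.proj(torch.cat([hidden, next_tok_emb], dim=-1))
        mask = nn.Transformer.generate_square_subsequent_mask(h.size(1))
        return self.lm_head(self.block(h, src_mask=mask))

head = MTPHead(d_model=64, vocab_size=1000)
hidden = torch.randn(2, 16, 64)        # trunk outputs for 16 positions
next_emb = torch.randn(2, 16, 64)      # embeddings of tokens shifted by one
print(head(hidden, next_emb).shape)    # torch.Size([2, 16, 1000])
```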
- Tweet from Xeophon (@TheXeophon): 600B params, second place in aider’s new polyglot bench, topping Sonnet: 61.7% o1, 48.9% V3, 45.3% Sonnet
- Tweet from wh (@nrehiew_): They have 2 types of RL rewards. Verifiers (code, math) and standard model based RM. Importantly the model based RM is trained COT style GRPO from deepseek math used here
- Tweet from Vaibhav (VB) Srivastav (@reach_vb): So.. V4 would likely not be Transformers? I wonder what direction would they lean toward! Quoting Vaibhav (VB) Srivastav (@reach_vb): The DeepSeek Technical Report is out!! 🔥 Trained on 14.8 Trillion To...
- Tweet from Jiayi Pan (@jiayi_pirate): @YouJiacheng @teortaxesTex They said they are going to do a model update 12.25-27 might be relevant
- Tweet from Andrew Curran (@AndrewCurran_): @teortaxesTex Anthropic style.
- Tweet from Xeophon (@TheXeophon): Yup, Deepseek V3 is live for me
- deepseek-ai/DeepSeek-V3-Base · Hugging Face: no description found
- Tweet from DeepSeek (@deepseek_ai): 🚀 Introducing DeepSeek-V3! Biggest leap forward yet: ⚡ 60 tokens/second (3x faster than V2!) 💪 Enhanced capabilities 🛠 API compatibility intact · Fully open-source models & papers · 1/n
- Tweet from Tim Dettmers (@Tim_Dettmers): Reading the report, this is such clean engineering under resource constraints. The DeepSeek team directly engineered solutions to known problems under hardware constraints. All of this looks so elegan...
- Tweet from Phil (@phill__1): Deepseek V3 is live on their chat interface and supports images now
- Tweet from lmsys.org (@lmsysorg): The best open-source LLM, DeepSeek V3, has just been released! SGLang v0.4.1 is the officially recommended inference solution for it. The SGLang and DeepSeek teams worked together to support DeepSeek V...
- Tweet from wh (@nrehiew_): > After hundreds of RL steps, the intermediate RL model learns to incorporate R1 patterns, thereby enhancing overall performance strategically.
- DeepSeek-V3 has quietly gone live on the web and API: Why does DeepSeek's multimodal mode require accompanying text when uploading an image?
- Tweet from Teortaxes▶️ (@teortaxesTex): > $5.5M for Sonnet tier. It's unsurprising that they're proud of it, but it sure feels like they're rubbing it in. «$100M runs, huh? 30.84M H100-hours on 405B, yeah? Half-witted Western h...
- Tweet from wh (@nrehiew_): Post training now. They FT on R1 (**NON LITE**) but say that it suffers from "overthinking, poor formatting, and excessive length". They have 2 types of data: 1) Standard synthetic data, 2) A sys...
- Wow deepseek v3 ? : Posted in r/LocalLLaMA by u/Evening_Action6217 • 327 points and 46 comments
Interconnects (Nathan Lambert) ▷ #ml-questions (5 messages):
Deepseek's Multi-head Latent Attention Mechanism, Implementations and Inference Libraries, Deepseek V2 Paper Insights, Deepseek V3 Inference Code
- Exploring Deepseek's Latent Attention: A member inquired if anyone has looked into Deepseek's Multi-head latent attention mechanism and mentioned a lack of detail in the V2 paper regarding low-rank approximations of weight matrices.
- They are currently working on creating a version and sought to know if others have implemented similar functionalities.
- Inference Libraries Offer Support: Another member suggested that inference libraries should already have implementations of deepseek mechanisms, highlighting SGLang's day-one support for V3.
- They also pointed out that vLLM, TGI, and hf/transformers have support for the new features, including Multi-head latent attention.
- Deepseek's Own Inference Code Available: A member provided a link to Deepseek's GitHub repo, specifically pointing out the inference code in model.py.
- This resource could assist those looking to implement or understand the new capabilities introduced in Deepseek V3.
- Checking Hugging Face for Implementations: The original inquirer noted they hadn't checked the Hugging Face side and planned to look into that.
- They expressed thanks for the information shared about the existing implementations.
Link mentioned: DeepSeek-V3/inference/model.py at main · deepseek-ai/DeepSeek-V3: Contribute to deepseek-ai/DeepSeek-V3 development by creating an account on GitHub.
Interconnects (Nathan Lambert) ▷ #ml-drama (15 messages🔥):
QvQ License Update, Bluesky Safety Concerns, AI Backlash from Data Scientists
- QvQ has a Permanent Apache 2.0 License: A member clarified that you can't retroactively update the license for QvQ, confirming there will always be an Apache 2.0 licensed version available for checkout.
- The license was then noted to have become more liberal than Llama, with another member jokingly mentioning the emergence of 'license wars'.
- Bluesky is Dangerous for Discussions: Concerns were raised about Bluesky's environment, described as not a safe place, with reports of an 'insane anti AI strain'.
- A member pointed out that the backlash against generative AI often comes from data scientists, expressing disdain for the technology while ignoring other problematic AI applications.
- OpenAI Addresses Team Concerns: A link shared by a member directed attention to a statement from OpenAI responding to queries about a former teammate, indicating ongoing discussions.
- This led to another member sharing intrigue in the form of a reaction emoji, highlighting the community's curiosity about the situation.
- Tweet from OpenAI Newsroom (@OpenAINewsroom): Here is a statement we provided in response to questions about our former teammate:
- Tweet from Casper Hansen (@casper_hansen_): I hate to be that guy, but you can't retroactively update the license. There will now forever be an Apache 2.0 licensed version of QvQ that you can git checkout
- Tweet from Esteban Cervi 🦌 (@EstebanCervi): @OpenAINewsroom
Interconnects (Nathan Lambert) ▷ #random (6 messages):
Meta paper on self-awareness in language models, Copyright lawsuits against AI companies, Anthropic and copyright issues, Public benefit ethos of AI companies
- Seeking Recent Meta Paper on Language Models: A user inquired about a recent (6 months) Meta paper discussing language models that recognize their own limitations but do not self-correct due to lack of awareness.
- The paper was referenced in an inactive thread, leaving details ambiguous.
- Legal Battle Over AI Copyrights: Discussion highlighted a copyright lawsuit against Ross Intelligence from Thomson Reuters, marking an early conflict in the emerging generative AI sector.
- As legal battles escalate, the outcomes may reshape the information ecosystem and AI industry.
- Critique of Anthropic's Practices: Concerns were raised regarding Anthropic's actions, with claims that the company harms the public interest by downloading numerous copyrighted books from illegal sources.
- Comments referred to this behavior as 'stealing the fire of Prometheus', suggesting a significant ethical conflict.
- Reaction to Anthropic's Alleged Copyright Violations: There was a humorous response to the serious critique of Anthropic, emphasizing the metaphor used and showing frustration over the situation.
- Responses suggested mixed feelings about the serious implications of the company’s actions.
Link mentioned: Every AI Copyright Lawsuit in the US, Visualized: WIRED is following every copyright battle involving the AI industry—and we’ve created some handy visualizations that will be updated as the cases progress.
Interconnects (Nathan Lambert) ▷ #memes (4 messages):
Bluesky performance, New LLM release from DeepSeek
- Critique of Bluesky's Quality: A member remarked on the high bar for evaluating platforms like Bluesky, insinuating that it may not perform well.
- Just cross posting from twitter lol was noted in the discussion, suggesting a casual attitude towards sharing content.
- Potential Game-Changer: DeepSeek's New Model: Simon Willison predicted a potential exciting moment if a new openly licensed 685B LLM from DeepSeek dropped on Christmas Day, highlighting the model's potential size and impact compared to existing ones.
- As noted, it is over 700GB for download with no existing license in the repo, leaving questions about its accessibility and documentation.
- Iwona Bialynicka-Birula (@iwonabb.bsky.social): Just kidding :)
- Tweet from Simon Willison (@simonw): Would be pretty fun if we ended 2024 with the best available LLM being an openly licensed 685B behemoth from a Chinese research lab that was released to Hugging Face on Christmas Day without so much a...
Interconnects (Nathan Lambert) ▷ #nlp (3 messages):
Monte Carlo Tree Search, Iterative Preference Learning, Reasoning in LLMs
- Monte Carlo Tree Search Enhances Reasoning in LLMs: A recent paper introduces a method to improve reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process inspired by AlphaZero, employing Monte Carlo Tree Search (MCTS).
- This approach utilizes look-ahead ability to refine instance-level rewards into granular signals and incorporates Direct Preference Optimization (DPO) to update LLM policies; a minimal DPO loss sketch follows after this list.
- Self-Evaluation Enhancements in MCTS: The paper combines outcome validation and stepwise self-evaluation to improve consistency during the reasoning process.
- It emphasizes the need for on-policy sampled data for effective self-improvement and includes extensive evaluations on arithmetic and commonsense reasoning.
- Opinion on Model Choices: A member expressed confusion about the choice of models utilized in the study, questioning the quality as it seemed down-bad for May 2024.
- This comment reflects ongoing discussions about the adequacy of techniques in the context of MCTS and reasoning trajectories.
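For reference, the DPO objective mentioned above reduces to a few lines; this is a minimal sketch with toy sequence log-probabilities, not the paper's training loop:

```python
# Minimal DPO loss: preferred (y_w) vs. rejected (y_l) sequence log-probs
# under the policy and a frozen reference model. beta is illustrative.
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Implicit reward: beta * (policy log-ratio minus reference log-ratio).
    ratio_w = logp_w - ref_logp_w
    ratio_l = logp_l - ref_logp_l
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()

# Toy usage with fake log-probabilities:
loss = dpo_loss(torch.tensor([-10.0]), torch.tensor([-12.0]),
                torch.tensor([-11.0]), torch.tensor([-11.5]))
print(loss)
```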
Link mentioned: Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning: We introduce an approach aimed at enhancing the reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process inspired by the successful strategy employed by ...
Interconnects (Nathan Lambert) ▷ #reads (8 messages🔥):
Effective RL Training for LLMs, DPO vs PPO, Reasoning in RL, Viewing Parties for Lectures
- Insights from CMU's RL Training Seminar: A seminar titled Towards effective RL training for LLMs was discussed, highlighting first half focuses on DPO vs PPO and PPO optimizations.
- The second part may appeal more to those interested in reasoning, particularly concerning PRM biases and Clip/Delta mitigations.
- DPO vs PPO Debates: Discussion emerged around the comparative benefits of DPO and PPO, especially in their relation to training LLMs more effectively.
- Is DPO superior to PPO? is the associated paper set to be presented at ICML 2024, further illuminating this critical area.
- Interest in Viewing Parties for Learning: A member suggested that viewing parties with open discussions on critical lectures and tutorial videos could be beneficial for collective learning.
- The response indicated a preference for gaining value over giving value, thereby sparking a conversation about collective engagement.
- PRM-Based Training Considerations: There was a nod to PRM-based training potentially offering more chances to shape rewards compared to ORM approaches.
- Questions arose around what methods incentivize better chains of thought (CoTs), underscoring the ongoing exploration of these concepts.
- Value in Repeated Video Viewing: One member mentioned their strategy of watching videos on repeat to retain information, indicating a personal learning technique.
- This sparked humor about the challenges of grasping complex topics, suggesting that shared viewing might enhance understanding.
Link mentioned: YouTube: no description found
GPU MODE ▷ #general (12 messages🔥):
Learning CUDA Programming, DETRs and PyTorch, Shared Memory in CUDA, DeepSeek-V3 Training, Earning via Telegram Strategies
- Beginners should identify goals in CUDA: When learning CUDA programming, beginners need to focus on practical applications like job opportunities or performance-driven projects.
- Ultimately, aspiring CUDA programmers should have a personal objective that drives their learning journey.
- Seeking help with DETRs in PyTorch: A user is looking for assistance with their hobby project involving DETRs and PyTorch, having faced challenges for three months.
- Another member expressed willingness to help, showing familiarity with DETRs.
- Dynamic Memory Allocation in CUDA: While discussing CUDA programming, it was noted that shared memory cannot be allocated dynamically from device code.
- Consequently, using a C++ vector data structure in shared memory is deemed impractical.
- DeepSeek-V3 Achieves Cost Efficiency: A link to the DeepSeek-V3 documentation was shared, highlighting mass scale FP8 mixed precision training as a significant advancement.
- Remarkably, the project reportedly reduced costs by 2 OOMs, raising discussions on its quality concerning the funding received.
- Telegram Schemes for Fast Earnings: A user advertised a scheme claiming to assist others in earning $100k within 72 hours for a 10% profit share upon success.
- Interested individuals were directed to reach out via Telegram for involvement.
- Charles William: Spreading the wealth around the world.
- Visualize and understand GPU memory in PyTorch: no description found
- DeepSeek-V3/DeepSeek_V3.pdf at main · deepseek-ai/DeepSeek-V3: Contribute to deepseek-ai/DeepSeek-V3 development by creating an account on GitHub.
GPU MODE ▷ #triton (8 messages🔥):
Casting Issues in Triton, Device Printing in Colab, Infinity Feature in Triton, Triton Recompilation, Scam Alert
- Casting Issue between fp8 and bf16 in Triton: Casting from fp8 to bf16 fails on SM89 due to a ptx error, causing issues with the triton.codegen_upcast_to_fp32 flag.
- A workaround is to perform .to(tl.float32).to(tl.bfloat16), though this may require adding a dummy operation to prevent fusion; a minimal kernel sketch follows after this list.
- Device Print doesn't Display in Colab: A user expressed frustration that device_print doesn't output anything in Colab, raising a potential bug or usage issue.
- There isn't a confirmed workaround for this problem yet, leaving users seeking solutions.
- Infinity Behavior in Triton: In response to an inquiry about tl.inf, it was confirmed that using float("inf") works as a substitute for torch.inf in Triton.
- This provides a functional equivalent for users needing infinity representation.
- When Triton Recompiles: A user asked a fundamental question regarding when Triton recompiles, indicating a need for clearer documentation on the recompile process.
- This question suggests potential confusion about the workflow and recompilation triggers in Triton.
- Warning About Potential Scam: A message offering a scheme to earn $100k within 72 hours appeared, asking for 10% reimbursement from participants, raising red flags about a potential scam.
- This situation warrants caution, with users being advised to report or avoid engagement with such dubious offers.
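A minimal kernel sketch of that two-step cast workaround, assuming Triton's standard @triton.jit API; the + 0.0 stands in for the dummy operation mentioned above, and everything else is illustrative:

```python
import triton
import triton.language as tl

@triton.jit
def fp8_to_bf16_kernel(x_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    offsets = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)   # fp8 input values
    x_f32 = x.to(tl.float32) + 0.0            # dummy op: keeps casts from fusing
    y = x_f32.to(tl.bfloat16)                 # fp32 -> bf16 avoids the SM89 ptx error
    tl.store(out_ptr + offsets, y, mask=mask)
    # Launch (illustrative): grid = (triton.cdiv(n_elements, BLOCK),)
```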
- Charles William: Spreading the wealth around the world.
- Casting to bf16 from fp8 breaks on SM89 · Issue #5491 · triton-lang/triton: Describe the bug Hello, Casting from fp8 to bf16 in triton fails on SM89 because of a ptx error. I am opening this issue because this causes {"triton.codegen_upcast_to_fp32": False} to fail ...
GPU MODE ▷ #cuda (14 messages🔥):
TMA vs cp.async for GEMM, DETRs and PyTorch inquiries, Performance requirements for WGMMA, CUTLASS discussion on Hopper structured sparse GEMM, Earning methods shared on social media
- TMA offers superior efficiency for GEMM: Members discussed why TMA (Tensor Memory Accelerator) is needed for efficient GEMM on Hopper while cp.async is sufficient on Ada, noting that TMA aids in freeing up registers due to the increased flops on systems like the H100.
- One member contributed that TMA is async, supporting concurrent operations, provides bounds checking, and allows for bulk scheduling.
- Challenge with DETRs project: A member expressed frustration with their DETRs and PyTorch-related hobby project, indicating they've been stuck for three months.
- Another member humorously suggested engaging more channels might yield better responses, leading to a light-hearted exchange about channel posting.
- WGMMA performance relies on registers: Discussion emphasized the need for inputs to be in shared memory while accumulation must remain in registers for WGMMA; it was noted that a microbenchmarking paper indicated one input needs to be in registers for peak performance.
- Members concluded there isn't a significant performance difference outside of structured sparse FP8 compared to earlier MMA methodologies.
- Insights on CUTLASS 3.6.0 enhancements: A link was shared regarding CUTLASS 3.6.0, detailing improvements in Hopper structured sparse GEMM for FP16, FP8, INT8, and TF32.
- The discussion highlighted the alignment of the convolution kernel API with gemm::GemmUniversal for performance improvements.
- Social media earning scheme proposal: One member posted an invitation to help others start earning $100k within 72 hours, with a stipulation of a 10% reimbursement.
- The post encouraged interested parties to connect privately, generating interest albeit mixed responses.
Link mentioned: CUTLASS 3.6.0 · NVIDIA/cutlass · Discussion #2013: Hopper structured sparse GEMM. FP16 FP8 INT8 TF32 A refactor to the CUTLASS 3.x convolution kernel::ConvUniversal API to bring it in line with gemm::GemmUniversal. Now the 3.x convolution API is no...
GPU MODE ▷ #torch (3 messages):
Guard Functions Impact, Earning Strategies
- Concerns about Guard Functions in Code: A member inquired about the performance impact of having multiple guard functions in their code, addressing whether reducing them might enhance efficiency.
- Is it an antipattern to overthink the number of guards? They seek guidance from the community on the ideal balance; a small inspection sketch follows after this list.
- Quick Wealth Earning Opportunity: Another member offered assistance to the first 20 interested individuals on how to earn $100k within 72 hours with a 10% profit reimbursement requirement.
- They encouraged direct contact via Telegram for those who are genuinely interested in this scheme.
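One way to ground that guard question is to inspect what Dynamo actually installs. A hedged sketch using torch._dynamo.explain; note this utility's call shape has shifted across PyTorch releases, so treat it as a starting point rather than a stable API:

```python
import torch

def f(x, y):
    return (x @ y).relu()

x, y = torch.randn(8, 8), torch.randn(8, 8)

# explain() reports graph breaks and the guards Dynamo installed for f;
# counting them is a more direct signal than guessing at an "ideal" number.
print(torch._dynamo.explain(f)(x, y))
```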
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #algorithms (1 messages):
Earning Strategies, Telegram Outreach
- Quick Profit Scheme Promoted: A member is offering assistance to the first 20 people interested in earning $100k within 72 hours, with a requirement to reimburse 10% of profits.
- Those interested are encouraged to send a friend request or direct message, emphasizing Telegram as the primary communication tool with the handle @CharlesWilliam26.
- Emphasis on Fast Earnings: The message promotes a rapid earning strategy, suggesting that participants can receive significant profits in a very short time frame.
- The approach includes a personal touch by requiring interested individuals to actively engage through DMs or friend requests.
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #cool-links (2 messages):
Character.AI Inference Optimization, Earning Money Online, Telegram Offers
- Character.AI Optimizes AI Inference: Character.AI focuses on efficient inference to enhance user experiences, examining techniques like multi-query attention and int8 quantization here. Their custom int8 attention kernel significantly improves inference speed for both compute-bound and memory-bound tasks.
- This builds on previous research that emphasized memory efficiency and reducing the KV cache size for better performance; a toy multi-query attention sketch follows after this list.
- Fast Track to $100k Earnings: An individual offers guidance to earn $100k within 72 hours, charging a 10% fee from the profits once received. Interested users are prompted to connect via friend request or direct message for more details.
- Contact can be made through Telegram at Charles William, who promotes a wealth-sharing initiative.
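As a toy illustration of the multi-query attention idea mentioned above: many query heads share a single key/value head, shrinking the KV cache by the head count. Shapes and the explicit expand are illustrative choices, not Character.AI's implementation:

```python
# Multi-query attention sketch: H query heads, one shared key/value head,
# so the KV cache is H times smaller than standard multi-head attention.
import torch
import torch.nn.functional as F

B, T, H, D = 2, 16, 8, 64
q = torch.randn(B, H, T, D)        # H query heads
k = torch.randn(B, 1, T, D)        # one shared key head
v = torch.randn(B, 1, T, D)        # one shared value head

# Broadcast the single KV head across all query heads.
out = F.scaled_dot_product_attention(
    q, k.expand(B, H, T, D), v.expand(B, H, T, D), is_causal=True
)
print(out.shape)                   # torch.Size([2, 8, 16, 64])
```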
- Optimizing AI Inference at Character.AI (Part Deux): At Character.AI, we’re building personalized AI entertainment. In order to offer our users engaging, interactive experiences, it's critical we achieve highly efficient inference, or the process b...
- Charles William: Spreading the wealth around the world.
GPU MODE ▷ #beginner (5 messages):
Learning CUDA and Triton, vLLM Token Throughput Analysis, Sequence Stacking in Attention Mechanisms, Optimized Attention Implementations, Earning Opportunities
- Path to Mastering CUDA and Triton: A member inquired about the best way to learn lower-level ML techniques like CUDA and Triton, mentioning resources like PMPP and coding puzzles.
- Is there an order to do these things in?
- Investigating vLLM’s TTFT with xFormers Backend: One member is profiling vLLM’s Token Throughput/First Token performance analysis using the xFormers backend, linking to relevant code.
- They questioned why vLLM does not use batched inference during the prefill stage.
- Understanding Sequence Stacking Benefits: Another member compared vLLM’s attention mechanism to FlashAttention’s implementation, describing it as 'sequence stacking' which allows for efficient handling of variable lengths.
- They provided a link to a blog post discussing this, emphasizing the trade-off between performance and flexibility; a small packing sketch follows after this list.
- Challenges with Attention Variants: A discussion highlighted that while optimized attention implementations like FlashAttention improve performance, they complicate the testing of new attention variants.
- Members noted the drawbacks of being tied to existing kernels and the potential for slow runtime if variants don't fit those frameworks.
- Opportunity to Earn Quick Cash: A member announced an offer to help 20 individuals earn $100k within 72 hours, requiring a 10% reimbursement of profits.
- They encouraged interested individuals to connect via Telegram for details on the program.
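A small self-contained sketch of that sequence-stacking idea: ragged sequences are concatenated without padding, and attention kernels are told where each sequence starts via cumulative offsets (the cu_seqlens convention FlashAttention's varlen API uses):

```python
# "Sequence stacking": pack variable-length sequences into one tensor and
# track boundaries with cumulative sequence-length offsets (cu_seqlens).
import torch

seqs = [torch.randn(n, 64) for n in (3, 7, 5)]      # three ragged sequences
packed = torch.cat(seqs, dim=0)                     # (15, 64), no padding
lens = torch.tensor([s.size(0) for s in seqs])
cu_seqlens = torch.zeros(len(seqs) + 1, dtype=torch.int32)
cu_seqlens[1:] = torch.cumsum(lens, dim=0)          # tensor([0, 3, 10, 15])

# A kernel can then mask attention so queries in sequence i only attend to
# keys in [cu_seqlens[i], cu_seqlens[i+1]), avoiding pad tokens entirely.
print(packed.shape, cu_seqlens.tolist())
```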
- FlexAttention: The Flexibility of PyTorch with the Performance of FlashAttention: no description found
- Charles William: Spreading the wealth around the world.
- vllm/vllm/attention/backends/xformers.py at dbeac95dbbf898bcc0965528fc767e9cadbbe0c5 · vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs - vllm-project/vllm
GPU MODE ▷ #pmpp-book (3 messages):
PMPP lectures, Earning strategies
- PMPP Lectures Shine Bright: A member praised the lectures by PMPP authors as pure gold, stating they help significantly in understanding the corresponding chapters from the book.
- Good to know, just ordered the book! expressed another member, highlighting the value of the lectures.
- Earning $100k in 72 hours Scheme: A user announced they will help the first 20 interested people on how to start earning $100k within 72 hours for a 10% reimbursement of their profits.
- They encouraged users to send a friend request or a DM to ask how, providing a Telegram link for direct contact.
- AUB Spring 2021 El Hajj: no description found
- Charles William: Spreading the wealth around the world.
GPU MODE ▷ #jax (1 messages):
Earning $100k quickly, Investment schemes, Telegram outreach
- Charles Offers Fast Cash: Charles William promises to help the first 20 people earn $100k within 72 hours but requests a 10% reimbursement of profits when earned.
- Interested individuals are encouraged to send a friend request or DM him, with a link provided to his Telegram for direct outreach.
- Telegram as a Communication Tool: He emphasizes the use of Telegram for interested individuals to contact him directly, stating it facilitates quick communication.
- Charles presents himself as a wealth distributor, aiming to spread wealth globally through this initiative.
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #torchao (1 messages):
Earning opportunities, Telegram outreach, Profit sharing
- $100k in 72 hours scheme: A member is offering assistance to the first 20 interested people on how to start earning $100k within 72 hours, with a 10% profit reimbursement upon receiving it.
- They encouraged potential participants to either send a friend request or a direct message to inquire more, emphasizing urgency in reaching out.
- Contact via Telegram: The member provided their Telegram link to facilitate immediate contact for further details about the earning opportunity here.
- They are actively promoting engagement, stating they are focused on spreading wealth worldwide, enticing those interested to reach out.
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #sequence-parallel (1 messages):
Earning $100k, Profit reimbursement, Telegram contact
- Quick Money-Making Blueprint: Charles William is offering guidance to the first 20 people interested in learning how to start earning $100k within 72 hours.
- Participants will need to reimburse him 10% of their profits once received, urging interested individuals to reach out via Telegram.
- Telegram Communication: Interested individuals can directly contact Charles on Telegram via the provided link.
- His handle is @CharlesWilliam26 where he encourages potential participants to DM him for further details.
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #off-topic (1 messages):
Earning $100k in 72 hours, Charles William's proposition, Reimbursement of profits, Telegram contact
- Unlock $100k in Just 72 Hours: A user shared an intriguing proposition, offering guidance to the first 20 people on how to start earning $100k within 72 hours with potential profits.
- Interested individuals are encouraged to reach out for details or send a friend request.
- 10% Reimbursement Hack: Participants will have to reimburse 10% of their profits back to the provider when they receive their earnings, creating a profit-sharing model.
- This reimbursement model raises questions about sustainability and ethics in profit-sharing schemes.
- Contact Charles via Telegram: For direct inquiries, Charles has provided a Telegram link for interested individuals to reach out quickly.
- He emphasizes spreading wealth around the world, positioning himself as a financial mentor.
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #irl-meetup (1 messages):
Earning $100k, Telegram Contact, Profit Sharing Strategy
- Quick Cash Strategy: Earn $100k in 72 hours: A member claims they can help the first 20 people earn $100k within 72 hours, proposing a 10% profit reimbursement after earnings are received.
- Interested individuals are encouraged to send a friend request or DM for details and are provided with a Telegram link for direct contact.
- Connect via Telegram for Quick Earnings: The member provided their Telegram handle @CharlesWilliam26, inviting individuals to reach out for assistance in earning money.
- They emphasize the immediacy of connection via Telegram, highlighting a sense of urgency in the proposal.
- Spreading Wealth Across the Globe: The member expresses a desire for financial distribution, stating they are spreading the wealth around the world.
- This phrase suggests a broader motivation behind their endeavor beyond personal profit.
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #hqq-mobius (1 messages):
Earning Strategies, Telegram Networking
- Get Rich Quick: Earn $100k in 72 Hours: An offer was extended to the first 20 people interested in learning how to earn $100k within 72 hours, with a catch of reimbursing 10% of their profits afterwards.
- Only those genuinely interested were encouraged to send a friend request or direct message for details, emphasized by a direct link to Telegram.
- Connect with Charles on Telegram: Interested parties can reach out to Charles directly via his Telegram handle @CharlesWilliam26 for more information.
- The message highlighted the notion of spreading wealth around the world, aligning with a community-driven approach.
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #triton-viz (1 messages):
Earning $100k strategy, Telegram outreach
- Opportunity to Earn $100k in 72 Hours: A member is offering assistance to the first 20 interested people on how to start earning $100k within 72 hours, with a 10% reimbursement of profits expected.
- They encourage interested individuals to send a friend request or DM, stating to ask me HOW! for more details.
- Telegram Contact for Assistance: For those interested in the earning opportunity, Charles William suggests contacting him via Telegram at this link.
- He promotes the idea of spreading the wealth around the world, implying a broader financial strategy.
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #llmdotc (1 messages):
Earning $100k in 72 hours, Profit-sharing model, Telegram outreach, Investment opportunities, Financial advice
- Earn $100k in 72 hours? No problem!: A user has offered to help the first 20 interested individuals start earning $100k within 72 hours, with a condition to reimburse 10% of profits.
- Interested parties are encouraged to send a friend request or direct message for more information.
- Join the Telegram Investment Group: The user provided a Telegram link for direct contact, emphasizing immediate outreach for those interested.
- Spreading the wealth around the world is framed as part of the user's initiative in financial advising.
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #rocm (1 messages):
Earning opportunities, Profit-sharing scheme, Telegram outreach
- Earn $100k in 72 hours?: A user offers guidance to the first 20 interested individuals on how to start earning $100k within 72 hours but requests a 10% reimbursement of profits when received.
- Potential participants are encouraged to send a friend request or DM for more details and can reach out via Telegram at this link.
- Telegram Connection with Charles: Users can connect with Charles William directly on Telegram for personalized advice on earning quickly.
- Charles promotes a wealth-sharing mindset, inviting individuals to join his profit-making venture through a Telegram message.
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #intel (1 messages):
Earning $100k, Charles William's Offer, Telegram Outreach
- Earn $100k within 72 hours: Charles William offers to guide the first 20 people on starting to earn $100k within 72 hours, requesting 10% reimbursement of profits upon receipt.
- Interested individuals are encouraged to either send a friend request or DM Charles for more information.
- Contact Charles via Telegram: Individuals with Telegram can connect with Charles directly through the provided link to discuss the earning opportunity further.
- He promotes his outreach with the tagline 'Spreading the wealth around the world.'
Link mentioned: Charles William: Spreading the wealth around the world.
GPU MODE ▷ #lecture-qa (1 messages):
- The only message was the same $100k-in-72-hours Telegram spam summarized under #hqq-mobius above.
GPU MODE ▷ #bitnet (7 messages):
No backpropagation training method, Energy-efficient model training, Random walk sampling technique, Discussion on gradient methods
- No Backprop, No Problem?: A user questioned the feasibility of a new training method that claims to work without backpropagation or momentum, expressing curiosity about its potential impact on ML training.
- Another member noted skepticism regarding its practical effectiveness beyond basic experiments, particularly with the 128 forward passes needed for gradient estimation.
- Energy-efficient Training Breakthrough: A paper quoted in the discussion suggests the method could enable training at 1.58-bit precision while using 97% less energy and 90% less memory than traditional methods.
- It also proposes a model format capable of storing a 175B model in roughly 20MB, raising expectations about resource-efficient AI capabilities.
- Sampling Techniques in Random Walks: A member described a multidimensional random-walk sampling methodology that keeps walks that reduce the loss and discards inferior variants (a minimal sketch follows this section's links).
- The community is intrigued by the implications of this method, particularly its potential connection to gradient computation.
- Batch vs Mini-Batch Gradient Discussion: In response to discussions on the new training method, a user drew parallels to the traditional batch versus mini-batch gradient techniques.
- This comparison aims to highlight the differences in training dynamics and efficiency that may accompany the new algorithm.
- Tweet from Ethan (@torchcompiled): This is a cool idea, but you won't have a good time past the MNIST toy example. No backprop means needing... 128 forward passes, for grad estimate with only 0.009 cos similarity with true grad...
- Tweet from Will (@_brickner): I woke up late, here is a cpu implementation https://colab.research.google.com/drive/1hXzf5xB4INzMUNTlAB8CI1V10-JV7zyg?usp=sharing
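For intuition, here is a minimal sketch of the forward-only, random-walk idea discussed above: perturb the weights, keep perturbations that lower the loss, discard the rest. The toy linear model, data, and step size are made up for illustration; this shows the general recipe, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    # any forward pass works here; a linear model keeps the sketch tiny
    return ((X @ w - y) ** 2).mean()

# toy regression data
X = rng.normal(size=(256, 16))
y = X @ rng.normal(size=16)

w = rng.normal(size=16)
best = loss(w, X, y)
sigma = 0.05  # random-walk step scale

for step in range(2000):
    candidate = w + sigma * rng.normal(size=w.shape)  # propose a walk
    cand_loss = loss(candidate, X, y)                 # forward pass only, no backprop
    if cand_loss < best:                              # keep walks that reduce loss
        w, best = candidate, cand_loss
    # inferior variants are simply discarded

print(f"final loss: {best:.6f}")
```

Averaging over many such perturbations instead of keeping a single winner is how the ~128 forward passes mentioned in the tweet above turn into a (noisy) gradient estimate.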
GPU MODE ▷ #arm (1 messages):
- The only message was the same $100k-in-72-hours Telegram spam summarized under #hqq-mobius above.
GPU MODE ▷ #sparsity-pruning (2 messages):
Sparsification in PyTorch, Earning Strategies
- Understanding Sparsify_ Functionality: The sparsify_ function in PyTorch requires a model whose dense weight matrices have already been zeroed out, e.g. by the Sparsifier, before it can effectively compress the weights (a minimal sketch follows this section's links).
- Any zeroed-out dense model can be used with sparsify_, allowing for custom masking solutions, as mentioned in the documentation.
- Quick Cash Earning Scheme Offered: The channel's other message was the same $100k-in-72-hours Telegram spam summarized under #hqq-mobius above.
- ao/torchao/sparsity/README.md at 567cb46409f5f9a761429a87d27b1d5312642888 · pytorch/ao: PyTorch native quantization and sparsity for training and inference - pytorch/ao
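Per the torchao README linked above, the flow is: zero out the weights yourself (or via a Sparsifier), then call sparsify_ to swap in a compressed representation. A minimal sketch, assuming the sparsify_/semi_sparse_weight entry points from that README and a CUDA GPU with 2:4 sparse kernel support; exact APIs may differ across torchao versions.

```python
import torch
from torchao.sparsity import sparsify_, semi_sparse_weight

model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).cuda().half()

# Step 1: produce the "zeroed out dense" weights. Any masking scheme works;
# here we naively zero two of every four weights to satisfy 2:4 sparsity.
with torch.no_grad():
    w = model[0].weight
    mask = torch.ones_like(w)
    mask.view(-1, 4)[:, :2] = 0   # two nonzeros per group of four
    w.mul_(mask)

# Step 2: compress the zeroed-out dense weights into an accelerated sparse format.
sparsify_(model, semi_sparse_weight())
```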
GPU MODE ▷ #webgpu (1 messages):
- The only message was the same $100k-in-72-hours Telegram spam summarized under #hqq-mobius above.
GPU MODE ▷ #liger-kernel (1 messages):
- The only message was the same $100k-in-72-hours Telegram spam summarized under #hqq-mobius above.
GPU MODE ▷ #metal (2 messages):
Running .air files on iPad, Fundraising strategies
- Inquiry about .air file compatibility: A user asked if it's feasible to take a .air file compiled for macOS and run it on iPad.
- No responses providing clarity were seen in the current messages.
- Earning $100k in 72 hours offer: The channel's other message was the same Telegram spam summarized under #hqq-mobius above.
GPU MODE ▷ #avx (1 messages):
- The only message was the same $100k-in-72-hours Telegram spam summarized under #hqq-mobius above.
GPU MODE ▷ #� (1 messages):
- The only message was the same $100k-in-72-hours Telegram spam summarized under #hqq-mobius above.
GPU MODE ▷ #edge (1 messages):
- The only message was the same $100k-in-72-hours Telegram spam summarized under #hqq-mobius above.
GPU MODE ▷ #arc-agi-2 (9 messages🔥):
Oreo Code Release, Hugging Face TRL Library, ARC-AGI-2 Repository, Chollet's Views on VLMs, 1D Task Generators
- Oreo's Code is MIA but RL Repos Shine: There's no code available for Oreo, but notable reinforcement-learning repositories like LaTRO were mentioned instead.
- It was suggested to prototype small models locally on CPU before switching to a cloud GPU for training.
- Check Out Hugging Face's TRL Resources: Members are encouraged to explore Hugging Face's TRL documentation for tools training transformer models via Reinforcement Learning.
- The TRL library covers the complete workflow from Supervised Fine-tuning to Proximal Policy Optimization.
- All Roads Lead to ARC-AGI-2 Repository: A member shared their intent to gather materials and experiments related to ARC-AGI-2 on GitHub.
- Another member expressed enthusiasm for learning and contributing to the repository in the upcoming year.
- Chollet Critiques Visual Language Models: Chollet argues against the efficacy of VLMs in overcoming benchmark challenges compared to strict LLMs, stating that ARC-AGI is fundamentally a 2D symbolic reasoning task.
- In contrast, a member expressed that 2D positional encoding appears beneficial, questioning the required size of vision encoders for smaller tasks.
- Innovative 1D Task Generators Take Flight: A member created 1D task generators (75 types currently) to facilitate fast method iterations and potentially extrapolate findings to 2D tasks, with their code available on GitHub.
- Contributors are welcomed to contribute to the task generators while visualizations showcase various task formats.
- Tweet from Mikel Bober-Irizar (@mikb0b): When models can't understand the task format, the benchmark can mislead, introducing a hidden threshold effect. And if there's always a larger version that humans can solve but an LLM can't...
- Tweet from François Chollet (@fchollet): If ARC-AGI required visual perception, you'd see VLMs outperform strict LLMs -- by a lot. Everyone tried VLMs during the 2024 competition -- no one got better results. Every single top entry used ...
- TRL - Transformer Reinforcement Learning: no description found
- GitHub - open-thought/arc-agi-2: Building the cognitive-core to solve ARC-AGI-2: Building the cognitive-core to solve ARC-AGI-2. Contribute to open-thought/arc-agi-2 development by creating an account on GitHub.
- GitHub - SalesforceAIResearch/LaTRO: Contribute to SalesforceAIResearch/LaTRO development by creating an account on GitHub.
- GitHub - optozorax/arc_1d: ARC-AGI like tasks generators in 1D: ARC-AGI like tasks generators in 1D. Contribute to optozorax/arc_1d development by creating an account on GitHub.
- ARC Tasks Overview: no description found
Notebook LM Discord ▷ #use-cases (7 messages):
Podcast integration with Google News, AI-generated podcasts, Summarizing Pathfinder adventures, Audio Overviews of news articles, Chatbots in everyday scenarios
- Podcast integration with Google News gains interest: A member suggested that integrating podcasts with Google News to summarize the top 10 stories could create dynamic content with both short and long versions.
- They noted the potential for enabling users to ask questions, enhancing interaction and engagement.
- AI takes the lead in generating podcasts: A user celebrated an AI-generated podcast that philosophically explored life's biggest questions, highlighting its smurf-tastic banter.
- This unique format promises an engaging experience, combining humor and deep thought, surely appealing to a wide audience.
- Pathfinder stories summarized in 15 minutes: One member shared their experience using AI to generate a podcast summarizing a 6-book series for Pathfinder 2, providing GMs with a concise overview.
- This efficiency in storytelling showcases a novel use of AI for gaming narratives.
- Impressive Audio Overviews of articles: A user reported generating Audio Overviews of news and Wikipedia articles that sound natural and well-paced, improving the listening experience.
- They highlighted the AI's ability to include current context, making the content feel more relevant and engaging.
- Chatbots create elevator comedy: A whimsical concept of chatbots creating comedic scenarios in an elevator was shared, with imagined dialogue showcasing their quirks.
- This playful take highlights the potential for AI's humor and interaction in mundane situations, sparking laughter.
Notebook LM Discord ▷ #general (54 messages🔥):
PDF Upload Issues, Podcast Customization, Language Settings, Feature Requests, AI Podcast Sharing
- PDF Upload Problems Persist: Users reported encountering an error message stating, 'an error occurred while uploading the font' when trying to upload PDF files.
- One user was advised to refresh their page to resolve the issue, while others shared their similar experiences.
- Need for Podcast Customization: A member expressed frustration about the podcast hosts often veering off-topic during recordings, desiring a more focused approach to the content.
- Another suggested using specific prompts to generate more structured podcasts, which transformed a promotional podcast into a detailed tutorial.
- Adjusting Language Settings: A user inquired about forcing NotebookLM to generate content exclusively in English, even when their native language was being used.
- Another member suggested logging out of the Google account, selecting the preferred language, and then logging back in.
- Suggestions for Feature Improvements: There was a discussion about whether to submit suggestions as feature requests or bugs, with encouragement for users to share ideas directly.
- One member highlighted the need for feedback options as well, emphasizing communication with the engineering team.
- Exploring AI Podcast Sharing: A user introduced a platform called Akas where AI-generated podcasts can be shared, embedded, or generated as RSS feeds.
- They highlighted the potential for Akas to serve as a bridge between AI-generated content and users' personal voices in podcasting.
- Akas: share AI generated podcasts: Akas is the ultimate platform for sharing AI-generated podcasts and your own voice. With more and more podcasts being created by AI, like those from NotebookLM and other platforms, Akas provides a sea...
- Upgrading to NotebookLM Plus - NotebookLM Help: no description found
LlamaIndex ▷ #blog (1 messages):
Report Generation Agent, LlamaParse, LlamaCloud
- Build a Report Generation Agent from Scratch: A fantastic video by @fahdmirza demonstrates how to build an agentic workflow that can generate a formatted report over a set of PDF research papers from an input report template.
- Enhancing Report Automation: The discussion highlighted the potential of generating automated reports from various research sources, broadening the scope of traditional analysis.
- Using advanced tools, this method promotes efficiency and consistency in research reporting.
LlamaIndex ▷ #general (36 messages🔥):
LlamaIndex and OpenAI integration, DocumentContextExtractor proposals, Tokenization and truncation issues, Generating LlamaIndex documentation, Payroll PDF parsing solutions
- LlamaIndex potentially lacks batch processing: A member asked whether the LlamaIndex LLM class could interface with the OpenAI/Anthropic Message Batching APIs, but responses indicated the existing abstractions focus on real-time interactions.
- Another member suggested using the raw OpenAI client instead (a sketch follows this channel's summary), expressing willingness to review any proposed pull requests for enhancements.
- DocumentContextExtractor offers cost-saving batch processing: An innovative use case for the DocumentContextExtractor was shared, highlighting how batch processing could reduce costs by 50% and provide a stateless solution, allowing for off-hours processing.
- The member mentioned that with this approach, users wouldn't need to keep a Python script running indefinitely, checking back later to review processing status.
- Tokenization limitations drive frustration: A user expressed frustration over the LlamaIndex tokenizer only offering encoding without decoding capabilities, questioning the utility of such limitations.
- Responses suggested using a splitter and managing chunk sizes, but one user humorously considered removing truncation features, blaming users for large document submissions.
- Request for LlamaIndex documentation in various formats: Another member inquired about obtaining the LlamaIndex documentation in formats such as PDF or markdown for building a RAG app, prompting discussion on possible methods to generate it.
- Responses indicated that generating the documentation in the desired formats is feasible, with a suggestion to continue the conversation through direct messaging.
- Addressing payroll PDF parsing challenges: A member indicated struggling to parse a payroll PDF using LlamaParse, asking for better alternatives for this task.
- Responses noted that LlamaParse should work well, particularly in its premium mode, suggesting it might be effective for the member's needs.
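For reference, the raw-client batching flow suggested above looks roughly like this with the OpenAI Batch API: write requests to a JSONL file, create a batch, and poll later from any process. The model name and prompts are placeholders.

```python
import json
from openai import OpenAI

client = OpenAI()

# one JSONL line per request; custom_id joins results back to your documents
requests = [
    {
        "custom_id": f"chunk-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": f"Give situating context for chunk {i}."}],
        },
    }
    for i in range(3)
]
with open("batch_input.jsonl", "w") as f:
    f.writelines(json.dumps(r) + "\n" for r in requests)

batch_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",  # batch pricing is roughly half of real-time
)
print(batch.id)  # store this; no script needs to keep running

# later, from any process:
#   done = client.batches.retrieve(batch_id)
#   results = client.files.content(done.output_file_id).text
```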
LlamaIndex ▷ #ai-discussion (3 messages):
Unstructured RAG, LangChain, Unstructured IO, Athina AI, LlamaIndex
- Unstructured RAG blog highlights clear benefits: A blog was shared about Unstructured RAG using LangChain and Unstructured IO, discussing how traditional systems face challenges with unstructured data like images and tables.
- It emphasizes Unstructured IO's role in organizing raw data, making it easier for retrieval-augmented generation (RAG).
- RAG systems struggle with unstructured data: The discussion pointed out that traditional RAG suffers when dealing with inconsistent formats, complicating information extraction and processing.
- Tools like Unstructured help convert unstructured data into organized formats, allowing for better performance in RAG pipelines.
- Implementation strategy shared for building RAG pipeline: The blog outlines the steps for implementing Unstructured RAG, using libraries like FAISS to process PDFs and create embeddings (a condensed sketch follows this section's link).
- It details combining document processing with LLM integration, using custom prompts to generate accurate responses grounded in the retrieved context.
- Evaluation methods proposed with Athina AI: An optional evaluation using Athina AI is suggested to assess the performance and accuracy of the RAG pipeline, facilitating refinements.
- This evaluation will assist in validating the RAG system and ensuring its effectiveness in real-world applications.
- Clarification on LlamaIndex relevance: A user questioned the connection between the shared blog and LlamaIndex, prompting a member to justify its inclusion as a resource.
- The intent behind sharing was to benefit the general discussion group by providing insights into efficient RAG implementation.
Link mentioned: End-to-End Guide: Implementing Unstructured RAG Systems: Learn the complete process for implementing Unstructured RAG systems. Boost AI performance with this comprehensive Athina AI Hub Original guide!
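A condensed sketch of the pipeline the guide describes: partition a PDF with Unstructured, embed the elements, index them with FAISS, and retrieve context for a query. The embedding model below is a stand-in for whichever one the guide actually uses.

```python
import numpy as np
import faiss
from unstructured.partition.pdf import partition_pdf
from sentence_transformers import SentenceTransformer

# 1. Parse the PDF into elements (tables, titles, and narrative text get distinct types)
elements = partition_pdf(filename="report.pdf")
chunks = [el.text for el in elements if el.text and el.text.strip()]

# 2. Embed the chunks (any embedding model works; this one is a stand-in)
embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embedder.encode(chunks, normalize_embeddings=True)

# 3. Index with FAISS (inner product == cosine similarity on normalized vectors)
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(np.asarray(vectors, dtype="float32"))

# 4. Retrieve context for a query, then hand it to an LLM with a custom prompt
query = embedder.encode(["What were Q3 revenues?"], normalize_embeddings=True)
_, ids = index.search(np.asarray(query, dtype="float32"), k=4)
context = "\n---\n".join(chunks[i] for i in ids[0])
```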
Eleuther ▷ #general (16 messages🔥):
Hugging Face Checkpoints, Fine-Tuning VLMs, Loss Calculation in Trainer
- Missing Optimizer States from Checkpoints: A user raised concerns about missing optimizer states in Hugging Face checkpoints, suggesting they may need them for their models.
- Another member confirmed that they believe the optimizer states are saved by the checkpointing code.
- Resources for Fine-Tuning VLMs: A discussion centered on the challenge of fine-tuning Vision Language Models (VLMs), noting that specific methods vary by model.
- One user highlighted that the LLaVA codebase has finetuning scripts and is widely used for this purpose.
- Fine-Tuning VLMs Requires Model Specifics: Users discussed various VLMs and noted that many open source options have their own finetuning scripts, such as Qwen-VL and InternVL.
- The conversation highlighted the variability in approaches required for respective VLMs.
- Custom Loss Handling in Hugging Face's Trainer: A user sought advice on customizing loss functions for causal language modeling while utilizing Hugging Face's Trainer, focusing on how to handle padded tokens.
- Another member suggested passing a custom collator to adjust prompt labels, referencing the TRL library for further assistance.
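The usual trick behind that suggestion: build labels from input_ids and set every padded position to -100 so the cross-entropy loss ignores it. A minimal sketch of such a collator, passed as data_collator to Hugging Face's Trainer; masking prompt tokens (as TRL's completion-only collator does) works the same way, just over more positions.

```python
import torch

class CausalLMCollator:
    """Pads a batch and masks padded tokens out of the loss with -100."""

    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def __call__(self, examples):
        # examples: list of dicts with "input_ids" (and optionally "attention_mask")
        batch = self.tokenizer.pad(examples, return_tensors="pt")
        labels = batch["input_ids"].clone()
        # -100 is the ignore_index of torch.nn.CrossEntropyLoss, so padded
        # positions contribute nothing to the language-modeling loss
        labels[batch["attention_mask"] == 0] = -100
        batch["labels"] = labels
        return batch
```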
Eleuther ▷ #research (17 messages🔥):
Latency reduction techniques for LLMs, Performance of LLMs in engineering applications, Comparative analysis of LLMs and optimized models, Self-improvement in LLMs, Insights on token prediction objectives
- Latency reduction techniques for LLMs explored: One user inquired about a compendium of techniques to reduce the latency of large language models at inference, particularly targeting optimizations at the CUDA or Triton level.
- There is ongoing interest in understanding effective strategies to streamline processing times for LLMs.
- LLMs making strides in engineering applications: A user shared a link to a publication discussing the application of large language models in engineering, expressing enthusiasm over the topic.
- However, they noted that access limitations on certain research platforms can be frustrating.
- Open models outperforming GPT-4 in function calling: A post highlighted a result where Outlines' structured generation combined with Phi-3-medium achieved 96.25% accuracy on a function-calling task, surpassing GPT-4's performance (a sketch of the approach follows this list).
- This achievement reflects the strength of community and open-source collaboration in AI development.
- Self-improvement methods for LLM performance discussed: One paper highlighted the necessity of exploring self-improvement for models that lack extensive human-annotated data, focusing on factors like diversity in responses and external rewards.
- This research aims to enhance the understanding of self-improving iterative methods within complex reasoning tasks.
- Critique of skipping pretraining stages: A user questioned the implications of skipping the pretraining stage with the next token prediction objective and focusing on the entire pretraining dataset directly during training.
- Another user suggested that while combining training tasks could prove effective, it would likely result in a slower training process overall.
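The structured-generation approach in that post constrains decoding to a schema, so the model cannot emit a malformed function call. A minimal sketch, assuming the Outlines API at the time of writing and a small stand-in model; the schema is illustrative.

```python
from pydantic import BaseModel
import outlines

class WeatherCall(BaseModel):
    location: str
    unit: str

# a small open model stands in for Phi-3-medium here
model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")
generator = outlines.generate.json(model, WeatherCall)

call = generator("Extract the function call: what's the weather in Paris, in celsius?")
print(call)  # a WeatherCall instance, guaranteed to match the schema
```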
- Beating GPT-4 with Open Source: no description found
- Deliberation in Latent Space via Differentiable Cache Augmentation: Techniques enabling large language models (LLMs) to "think more" by generating and attending to intermediate reasoning steps have shown promise in solving complex problems. However, the standa...
- Improving Factuality and Reasoning in Language Models through Multiagent Debate: Large language models (LLMs) have demonstrated remarkable capabilities in language generation, understanding, and few-shot learning in recent years. An extensive body of work has explored how their pe...
- Machine learning helps researchers develop perovskite solar cells with near-record efficiency: An international team of scientists has used machine learning to help them develop perovskite solar cells with near-record efficiency. In their paper published in the journal Science, the group descri...
- CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up: Diffusion Transformers (DiT) have become a leading architecture in image generation. However, the quadratic complexity of attention mechanisms, which are responsible for modeling token-wise relationsh...
- Beat GPT-4o at Python by searching with 100 dumb LLaMAs: Scale up smaller open models with search and evaluation to match frontier capabilities.
- B-STaR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners: In the absence of extensive human-annotated data for complex reasoning tasks, self-improvement -- where models are trained on their own outputs -- has emerged as a primary method for enhancing perform...
- DeepSeek-V3/DeepSeek_V3.pdf at main · deepseek-ai/DeepSeek-V3: Contribute to deepseek-ai/DeepSeek-V3 development by creating an account on GitHub.
Eleuther ▷ #interpretability-general (6 messages):
GPT-2 Token Activations, BOS Token Discussion
- First Token Activations Skyrocket: Tests on gpt2-small showed the first token's activation norms are around 3000, significantly higher than the roughly 100 seen at later positions (a probe sketch follows this section).
- The observation was made after subtracting the mean from the activations.
- BOS Token Confusion in GPT-2: There was a discussion about the BOS token in GPT-2, with one member insisting it doesn't have one since the default tokenizer doesn’t add it.
- However, there's a counterpoint mentioning that GPT-2 does have a BOS token, although the activation norms appear consistent regardless.
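A quick probe to reproduce the observation, assuming plain Hugging Face transformers (the original tests may have used other tooling); note that GPT-2's tokenizer indeed prepends no BOS token by default, matching the discussion above.

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

enc = tok("The quick brown fox jumps over the lazy dog", return_tensors="pt")
with torch.no_grad():
    out = model(**enc, output_hidden_states=True)

# L2 norm of the residual stream at each position, at a middle layer
acts = out.hidden_states[6][0]   # (seq_len, d_model)
norms = acts.norm(dim=-1)
for token, n in zip(tok.convert_ids_to_tokens(enc["input_ids"][0]), norms):
    print(f"{token:>12} {n:8.1f}")  # the first position is typically a large outlier
```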
Eleuther ▷ #multimodal-general (1 messages):
Encoder-Free VLMs, Video VLMs, Encoder Efficiency, Fuyu Model Series, EVE Model
- Encoder-Free VLMs gaining traction: There is increasing interest in encoder-free vision-language models, particularly in the context of video VLMs where encoder efficiency is a major concern.
- A recent NeurIPS paper titled EVE: Encoder-Free Vision-Language Models explores this direction, signaling a shift away from traditional CLIP-style architectures.
- Concerns over Fuyu model performance: Discussion highlighted that while the Fuyu model series aims to address encoder issues, it has not performed particularly well in practice.
- This raises questions about the viability of such architectures in enhancing overall end-to-end quality for video VLMs.
- Request for feedback on encoder-free approaches: A member is seeking comments and recommendations regarding the direction of research into encoder-free VLMs, reflecting a desire to navigate current challenges.
- They emphasize the need for insights into how to improve encoder efficiency and outcome quality in the context of visual media.
Link mentioned: GitHub - baaivision/EVE: [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models: [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models - baaivision/EVE
tinygrad (George Hotz) ▷ #general (8 messages🔥):
Proof in Lean Bounty, BITCAST Const Folding, Matching Engine Performance Bounties, Tinygrad Updates, Performance Optimization
- Seeking Proof in Lean Bounty: A member expressed interest in starting work on a proof in the Lean bounty system and sought assistance.
- Another member recommended checking the newer version of tinygrad notes for relevant information.
- Optimization with BITCAST Const Folding: A question was raised about interest in implementing BITCAST const folding for optimization of compile time.
- A member responded positively, asking for specifics on which directory to focus on for this task.
- Analyzing Matching Engine Performance Bounties: Discussion initiated on performance bounties related to matching engines, linking to the issues page for reference.
- A user provided insights, detailing that the model lower result was already achieving 25ms, raising questions about previous resolutions.
- Clarification on Rewrite Bounty: A member clarified that their focus is on the rewrite section of the matching engine bounties, as per previous discussions.
- They referenced old PRs related to the bounty, suggesting they were outdated.
- tinygrad-notes/20241217_st.md at main · mesozoic-egg/tinygrad-notes: Tutorials on tinygrad. Contribute to mesozoic-egg/tinygrad-notes development by creating an account on GitHub.
- Issues · tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️ - Issues · tinygrad/tinygrad
tinygrad (George Hotz) ▷ #learn-tinygrad (31 messages🔥):
Performance Comparisons, Tinygrad Model Integration, Beam Search Efficiency, GPU Compatibility, Kernel Caching
- Tinygrad Performance vs. PyTorch: Tinygrad takes around 800ms for a forward pass on CUDA versus about 17ms for PyTorch, but jitting and beam search are expected to close the gap (a minimal TinyJit sketch follows this section's link).
- With jitting and beam search, performance should potentially match or exceed PyTorch, as noted by users questioning the speed inconsistency.
- Model Input Handling Issues: A user discovered the need to recreate input tensors in a loop to ensure outputs match, but this significantly slowed down Tinygrad processing.
- Using tiny_input.clone() led to attribute errors related to the CUDA allocator, prompting further investigation.
- Integration of Model Changes: Changes from PR #8309 were merged successfully, improving functionality where cloning inputs is required and matching PyTorch speeds.
- The integration highlights the need for regression tests to ensure stability in performance as changes are made.
- RTX 4070 GPU Discussions: Users discussed specific hardware, confirming an RTX 4070 laptop GPU is being used, along with driver version 535.183.01 and CUDA 12.2.
- Concerns were raised regarding potential issues with open-source kernel drivers, leading to requests for additional system logs.
- Kernel Caching in Beam Search: When querying if kernels from beam search can be reused, it was confirmed that they are cached for efficiency.
- There was further discussion on the possibility of shipping these kernels to other similar machines to avoid repeated searches.
Link mentioned: fish-speech/fish_speech/models/text2semantic/llama.py at main · fishaudio/fish-speech: SOTA Open Source TTS. Contribute to fishaudio/fish-speech development by creating an account on GitHub.
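For readers hitting the same 800ms-vs-17ms gap, here is a minimal TinyJit sketch; the network is made up. Beam search is enabled separately, e.g. by running with BEAM=2 set in the environment, and the kernels it finds are cached so later runs skip the search, which matches the caching discussion above.

```python
from tinygrad import Tensor, TinyJit, nn

class Net:
    def __init__(self):
        self.l1 = nn.Linear(256, 256)

    def __call__(self, x: Tensor) -> Tensor:
        return self.l1(x).relu()

net = Net()

@TinyJit
def forward(x: Tensor) -> Tensor:
    return net(x).realize()

# early calls trace and capture the kernels; later calls replay them,
# so inputs should be fresh tensors of the same shape on each call
for _ in range(5):
    out = forward(Tensor.randn(32, 256))
```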
Cohere ▷ #discussions (13 messages🔥):
Christmas Greetings, Re-ranker Pricing Inquiry, AI and ML Learning Journey
- Christmas Cheer Spreads: Multiple members enthusiastically wished each other a Merry Christmas, expressing holiday spirit with greetings and emojis throughout the channel.
- Mapler added an image related to building with Cohere, further contributing to the festive atmosphere.
- Inquiry About Re-ranker Pricing: Mecatron asked about the pricing of the re-ranker, prompting Mapler to provide a link to the Cohere pricing page.
- The pricing details outlined costs for different models, highlighting $2.50 for input and $10.00 for output per 1M tokens for Command R+.
- Newcomer Introduces Themselves: A new member expressed excitement about learning AI and ML, specifically focusing on LLM as a beginner in the field.
- They hope to gain knowledge and excel in their career through engaging with this community, receiving welcoming responses.
- Its Christmas Eve GIF - Christmas Eve Snoopy Santa Claus - Discover & Share GIFs: Click to view the GIF
- Pricing - Affordable Enterprise Generative AI Models: Access our models directly through our API to create scalable production workloads.
Cohere ▷ #questions (6 messages):
CMD-R updates, R7B beginnings, HuggingFace finetunes, User feedback
- Future Updates for CMD-R: A member inquired about plans for future updates to CMD-R, highlighting interest in its development.
- Discussion around updates remains speculative, as the community awaits official plans or announcements.
- Curious Start for R7B: Two Ans: A member expressed curiosity about the R7B beginnings, sharing an image with the comment 'two ans'.
- Another member found the situation odd, questioning its frequency and eliciting a light-hearted response.
- Finetuning Command R on HuggingFace: One member speculated about the terms preventing fine-tuning and sharing on HuggingFace, as unusual finetunes of CMD-R seem scarce.
- They pondered whether it was due to restrictions or just a lack of community interest, reflecting on the current state of CMD-R.
- User Reactions on CMD-R: A discussion arose regarding whether members are 'sleeping on CMD-R' due to a lack of recent posts or activity.
- This indicates a potential gap in community engagement and enthusiasm for CMD-R, prompting further inquiry.
Cohere ▷ #cmd-r-bot (15 messages🔥):
LLM University, Command R model, Command R+ performance
- Mastering NLP with LLM University: Cohere offers a learning hub called LLM University, providing expert-led courses and guides to master Enterprise AI, covering NLP and LLMs.
- Explore the full course to build foundational knowledge and practical skills in this area.
- Command R Overview and Capabilities: Command R is a large language model optimised for conversational interaction and capable of handling long context tasks with a 128,000-token context length.
- It excels in retrieval augmented generation (RAG) and supports producing text across 10 languages, emphasizing its strong performance in multi-lingual tasks.
- Command R+'s Enhanced Performance: Command R+ is touted as the most performant large language model, trained on a diverse set of texts to accomplish complex RAG tasks.
- This model is specifically strong in workflows that require multi-step tool use, expanding the capabilities of LLMs in production settings.
Cohere ▷ #projects (1 messages):
Voice to Voice chat app, Music Generation using Generative AI, DNN-VAD, NLP projects, ASR project
- Engagement of AI Engineer for Collaboration: A member expressed interest in collaboration, introducing themselves as an AI engineer with experience in DNN-VAD, NLP, and ASR projects.
- They highlighted their recent work on a Voice to Voice chat app and Music Generation from text prompts using the stereo-melody-large model, stating, 'I would like to work with you.'
- Seasonal Greetings: The same member greeted others with a warm, 'Merry Christmas!' signaling a friendly atmosphere in the discussion.
- This greeting was a light-hearted addition to their professional introduction.
Modular (Mojo 🔥) ▷ #general (6 messages):
io_uring networking, Mojo swag, Modular merchandise
- Exploring io_uring for Networking: A member asked for examples that demonstrate how io_uring can be utilized in networking, expressing a desire to learn more despite their limited familiarity.
- Start with the man pages was a suggestion made to guide the inquiry into io_uring.
- Excitement Over Mojo Swag: A member shared their gratitude for receiving Mojo swag, thanking the team for facilitating the delivery to a remote location.
- They included a photo to showcase their new gear, which sparked enthusiasm from others in the chat.
- Modular Merch is Hot: Members discussed the appealing aspects of Modular’s merch, highlighting how it is likely to be quite popular.
- Comments about the quality of the shirts and how the sticker 'goes hard' indicated positive perceptions of the products.
Modular (Mojo 🔥) ▷ #mojo (19 messages🔥):
Importing String Module Issues, StringRef and Crash Causes, Testing for EOF in Read Until Delimiter, Concerns About Copyable Traits
- Importing String Module Causes Errors: A member encountered errors after changing the import from import collections.string to import src.collections.string, indicating issues with the module's path in their setup.
- Another member noted that the src part should be omitted from the import, as it did not appear in standard examples.
- StringRef() Needs Length Check: After investigating a crash, a member suggested that StringRef() should verify that the received length is not negative, since passing a negative length through to memcpy caused the crash.
- A community member acknowledged that StringRef is unsafe, recommending the use of StringSlice instead.
- Testing for EOF in Read Until Delimiter: A member confirmed they added a test to ensure read_until_delimiter raises EOF, linking to their GitHub commits documenting this work.
- The commit can be viewed here.
- Concerns Over Copyable Traits Design: A member expressed concerns about the design of the Copyable and ExplicitlyCopyable traits, shared on the Modular forum.
- This discussion could lead to design changes as the community assesses the current implementation.
Link mentioned: Commits · mahiro21h/mojo: The Mojo Programming Language. Contribute to mahiro21h/mojo development by creating an account on GitHub.
Modular (Mojo 🔥) ▷ #max (2 messages):
Modular Stack Kernel, MAX vs XLA Compile Times
- Inquiry about Modular Stack Kernel: A member asked whether there is a dedicated kernel for the modular stack.
- This inquiry highlights the ongoing interest in optimizing kernel support within modular implementations.
- MAX positioned as XLA Competitor: One member suggested that MAX may serve as a competitor to XLA, specifically criticizing XLA's compile times.
- Bad compile times with JAX are the responsibility of XLA was the point emphasized during the discussion on performance optimizations.
DSPy ▷ #show-and-tell (1 messages):
PyN8N, DSLModel, AI Workflow Creation
- Trouble Loading PyN8N Site: A user reported that a required part of the PyN8N site couldn’t load, potentially due to browser issues.
- They suggested checking the connection, disabling ad blockers, or trying a different browser to resolve the issue.
- DSPy Support via DSLModel: Discussion highlighted that DSPy support is provided through DSLModel, enhancing functionality.
- The integration allows users to leverage advanced features for better performance.
- AI Helps Create Node Workflows: It was noted that the PyN8N client enables users to utilize AI for generating nodes and workflows.
- The README is described as aspirational, showcasing the potential of the tool while the client itself is functional.
DSPy ▷ #general (12 messages🔥):
NotebookLM inline sourcing, Jekyll glossary script, Typing.TypedDict usage, Pydantic for output fields design
- Exploring NotebookLM Inline Sourcing: A member inquired about how NotebookLM inline sourcing functions, indicating interest in its implementation.
- No additional information was provided on the inquiry.
- Script for Glossary Generation in Jekyll: A member shared a Jekyll script that generates a glossary of key terms, integrating DSPy for LLM interactions.
- The output, which includes detailed entries on terms like Artificial General Intelligence, facilitates further tuning and editing.
- Typing.TypedDict Discovery: A member remarked on discovering typing.TypedDict, indicating a learning moment regarding type hinting in Python.
- Another member commented on the challenges it presents, highlighting the intricacy involved.
- Design Considerations for Output Fields: A discussion ensued about the design of output fields when returning multiple instances in an array, questioning the elegance of the current structure.
- A suggestion was made to use pydantic.BaseModel with descriptions for each field to improve the output design (a sketch follows this section's link).
- Finalizing Index with Jekyll Script: The member reiterated that the glossary-generation script works effectively, producing a near-complete index that can be finalized manually.
- They also raised design questions about the long, ugly description parameter used in the output field.
Link mentioned: A script to generate a glossary of key terms from your Jekyll posts. We're using DSPy to handle LLM interactions; it helps with boilerplate prompt context and parsing responses into Pydantic objects. To run this, put this script in a folder named 'scripts' (or whatever) in your Jekyll site directory. Then plug in your Anthropic API key (or point DSPy to the LLM endpoint of your choice). It will output a YAML file named 'glossary.yaml' to your '_data' directory.: A script to generate a glossary of key terms from your Jekyll posts. We're using DSPy to handle LLM interactions; it helps with boilerplate prompt context and parsing responses into Pydantic o...
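The pydantic suggestion replaces one long description string with per-field descriptions on a typed output. A minimal sketch, assuming a recent DSPy with typed signatures; the field names are illustrative.

```python
from pydantic import BaseModel, Field
import dspy

class GlossaryEntry(BaseModel):
    term: str = Field(description="The key term, e.g. 'Artificial General Intelligence'")
    definition: str = Field(description="A one- or two-sentence plain-language definition")

class ExtractGlossary(dspy.Signature):
    """Extract glossary entries from a blog post."""

    post: str = dspy.InputField()
    entries: list[GlossaryEntry] = dspy.OutputField(desc="All key terms found in the post")

extract = dspy.Predict(ExtractGlossary)
# result = extract(post=some_markdown)   # result.entries is a list[GlossaryEntry]
```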
LLM Agents (Berkeley MOOC) ▷ #mooc-questions (8 messages🔥):
Certificate Distribution, Certificate Declaration Form, Next Course Start Date
- Certificate Distribution Timeline: According to a member's post, certificates will be distributed by the end of January. For more details, see the original post here.
- Missing Certificate Declaration Form: A member inquired about receiving a certificate without filling out the certificate declaration form, having completed all other requirements. Another member clarified that unfortunately, if the form is not submitted, they will not receive a certificate.
- Questions about Upcoming Course: Members are expressing interest in the timeline for the next course, following the discussion about certification. It's noted there will be another course in the spring.
OpenInterpreter ▷ #general (7 messages):
Open Interpreter API, OCR functionality, Desktop version release, Voice to Voice chat app, QvQ with Open-Interpreter
- Open Interpreter API demonstrates precision: The Open Interpreter API can locate visual controls with single-pixel precision from natural-language queries and representations of the display.
- Users have shared Python examples for utilizing the API in their projects (a hedged sketch closes this section).
- OCR functionality faces issues: A member mentioned that the API is designed to use OCR to identify icons and text on screens, but reported that it appears to be broken.
- Another user confirmed they have not received a successful response yet.
- Inquiries about desktop version release: One user asked when the desktop version of the Open Interpreter would be released.
- The question reflects interest in broader accessibility for end users.
- AI engineer seeks collaboration: A member introduced themselves as an AI engineer with experience in DNN-VAD, NLP, and ASR, and expressed interest in collaboration on projects involving Generative AI.
- They highlighted their recent work on a Voice to Voice chat app and Music Generation from text prompts.
- Discussion on QvQ and Open-Interpreter OS mode: A user queried how QvQ would function when integrated with Open-Interpreter in OS mode.
- This indicates ongoing discussions about interoperability and functionality within the community.
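A hedged sketch of calling such a point-location service over HTTP; the endpoint path and payload shape below are assumptions for illustration, not the documented contract, so check the Open Interpreter docs before relying on them.

```python
import base64
import requests

# NOTE: the endpoint path and payload shape are assumptions for illustration;
# consult the Open Interpreter docs for the real API contract.
with open("screenshot.png", "rb") as f:
    screenshot_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "https://api.openinterpreter.com/v0/point/",  # assumed endpoint
    json={"query": "the blue Submit button", "base64": screenshot_b64},
    timeout=30,
)
print(resp.json())  # expected: pixel coordinates of the matched control
```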
Nomic.ai (GPT4All) ▷ #general (3 messages):
UI Features, Copying Code, Keyboard Shortcuts
- No dedicated copy button for AI-generated code: A member noticed the absence of a dedicated 'copy' button for AI-generated code within the chat screen UI, seeking clarification if others share the same observation.
- They expressed gratitude for any assistance provided regarding this matter.
- Cut and paste functionality issues: Another member confirmed that traditional mouse cut and paste functions do not work in the chat UI or configuration pages, but Control-C and Control-V are functional.
- This clarification aims to assist those having difficulties with the copying process.
- Inquiry about new template use: A member inquired, in French, if anyone had successfully written using the new template.
- This question indicates a community interest in exploring new features available.
LAION ▷ #general (2 messages):
TTS dataset creation, Audio segmentation, Using Whisper for transcription
- Seeking Advice for TTS Dataset from Long Audio: A member is looking for advice on building a TTS dataset using several long audio files, each over an hour long, aiming to segment and transcribe them.
- They specifically want to know how to efficiently split these samples and what tools or methods can be used for this task.
- Using Whisper for Sentence Detection: Another member suggested that Whisper can detect sentences, proposing it as a tool to split audio files at sentence lengths.
- This could potentially streamline the process of preparing audio for transcription in TTS applications.
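A minimal sketch of the Whisper suggestion: transcribe once, then cut the audio at the segment timestamps Whisper returns. Assumes the openai-whisper and soundfile packages; the paths and model size are placeholders.

```python
import os
import whisper
import soundfile as sf

model = whisper.load_model("base")
audio_path = "lecture_01.wav"

# Whisper returns sentence-ish segments with start/end timestamps
result = model.transcribe(audio_path)

audio, sr = sf.read(audio_path)
os.makedirs("clips", exist_ok=True)
for i, seg in enumerate(result["segments"]):
    clip = audio[int(seg["start"] * sr): int(seg["end"] * sr)]
    sf.write(f"clips/seg_{i:04d}.wav", clip, sr)            # audio clip
    with open(f"clips/seg_{i:04d}.txt", "w") as f:
        f.write(seg["text"].strip())                        # paired transcript
```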
MLOps @Chipro ▷ #general-ml (1 messages):
ML ops frameworks, HPC environments, Guild AI stability, DIY ops framework
- Seeking ML Ops Frameworks for HPC: A member is looking for ML ops frameworks suitable for HPC environments, emphasizing the need for stability and cost-effectiveness.
- They mentioned Guild AI as a potential option but questioned its stability, expressing a preference for lightweight, self-hosted solutions rather than SaaS.
- Challenges with Server Management: The member implied that setting up a server for hosting could be too labor-intensive, which they wish to avoid.
- They expressed a willingness to write a simple ops framework themselves instead of managing a server.