From Code to Cash: How André Arko Builds Better Tools and Gets Paid for Open Source

Duration: 00:40:38
November 13, 2025
  • The new rv tool aims to simplify Ruby development by offering a faster, single-tool approach to dependency management, inspired by Python's uv.
  • Ruby remains significant in modern web development, with major companies like GitHub, Figma, and Stripe relying on it, despite perceptions that its popularity is declining.
  • Spinel Cooperative is exploring a new model for funding open-source projects by offering paid services and expertise to corporations, moving away from traditional non-profit donations.
Cloud Repatriation: Because Conspiracy Theories Are Cheaper with Deana Solis

Duration: 00:40:01
October 16, 2025
  • The discussion highlights the challenges and nuances of cloud economics, emphasizing the need for engineers and finance professionals to understand each other's language to effectively manage cloud spending.
  • There's a critical examination of AI's role in DevOps, acknowledging it as both a powerful tool for efficiency and a potential threat when used without deep understanding or accountability.
  • The conversation delves into the evolving nature of IT careers, stressing the importance of continuous learning, adaptation, and building community in the face of rapid technological change.
Five Slot Machines at Once: Chris Weichel on the Future of Software Development

Duration: 00:40:52
October 2, 2025
  • Ona's core value proposition is providing isolated development environments for both humans and AI agents, enabling safer and more scalable software development.
  • The podcast highlights a three-wave evolution of coding, from handcrafted code to AI-assisted editors to autonomous AI agents writing and modifying code.
  • A key challenge for software engineers is adapting to working with AI agents by learning to decompose problems effectively and manage multiple parallel tasks, suggesting a need for new interfaces and workflows.
From Aurora to PlanetScale: Intercom’s Database Evolution with Brian Scanlan

Duration: 00:43:33
September 18, 2025
  • Intercom reoriented its entire company around generative AI after seeing the potential of GPT-3.5 to improve its chatbot offerings for customer support.
  • Intercom replaced its Aurora database setup with PlanetScale's managed Vitess offering to solve problems like sharding, connection pooling, and fast failovers, citing the need for a more hands-on partner than AWS could provide.
  • Intercom's on-call system relies on paid volunteers rather than conscripted employees, fostering a culture where on-call work is valued, manageable, and even seen as a learning opportunity.
Conversations at the Intersection of AI and Code with Harjot Gill

Duration: 00:33:41
September 4, 2025
  • CodeRabbit automates code reviews with AI to alleviate bottlenecks, especially with the rise of AI-generated code, by identifying issues and offering context-aware feedback.
  • Unlike traditional static analysis tools, CodeRabbit uses a multi-pass agentic system and reasoning models within sandboxed environments to navigate and understand complex codebases, minimizing false positives.
  • CodeRabbit offers a generous free tier and predictable pricing for paid teams by focusing on team products and carefully engineering the system to avoid negative gross margins, while also democratizing access to AI-powered code reviews.
The Transformation Trap: Why Software Modernization Is Harder Than It Looks

Duration: 00:33:26
August 21, 2025
  • Moderne's approach to software modernization uses deterministic recipes to transform code, leveraging lossless semantic trees and enabling LLMs to analyze and refactor code at scale (see the sketch after this list).
  • Cultural differences shape modernization efforts, but both intentionally diverse organizations (like Netflix) and unintentionally diverse ones (like JPMorgan Chase) need solutions for code inconsistency, which underscores how universal the problem is.
  • While AI authorship tools can increase developer productivity, the speaker expresses skepticism that they alone can address the challenges of large-scale migrations, particularly in complex, open-ended problem spaces.
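A rough illustration of the deterministic-recipe idea from the first bullet above: the Python sketch below uses only the standard ast module (not Moderne's lossless semantic trees or its actual recipe format, which the episode does not detail) to apply a rule-based rewrite that yields the same output for the same input every time, which is what makes this kind of transformation safe to run across large codebases.

```python
import ast

# Hypothetical "recipe": deterministically rename calls to a deprecated
# function. All names here are illustrative; this is not Moderne's API.
class RenameCall(ast.NodeTransformer):
    def __init__(self, old: str, new: str):
        self.old, self.new = old, new

    def visit_Call(self, node: ast.Call) -> ast.Call:
        self.generic_visit(node)  # rewrite nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == self.old:
            node.func.id = self.new
        return node

source = "log_warn('disk full')"
tree = RenameCall("log_warn", "log_warning").visit(ast.parse(source))
print(ast.unparse(tree))  # prints: log_warning('disk full')
```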
AI's Security Crisis: Why Your Assistant Might Betray You

Duration: 01:05:01
August 7, 2025
  • The speaker underscores the significance of blogging as a powerful tool for influencing the AI space, emphasizing quality of readership over quantity.
  • They highlight the security risks that arise when Large Language Models (LLMs) combine access to private data, exposure to untrusted inputs, and the ability to exfiltrate data, a dangerous combination given the models' gullibility.
  • The discussion covers the potential of LLMs to democratize coding but also suggests that expertise remains crucial for navigating complexities and ensuring responsible use.
Betting on AI: The Delusion Driving Big Tech

Duration: 01:08:33
July 24, 2025
  • The podcast explores the unsustainability of current AI business models, particularly the fact that companies are not generating significant revenue and are relying on "wash trading" tactics.
  • A key point of discussion is Microsoft's investment in OpenAI, with the suggestion that Microsoft may have anticipated OpenAI's failure and strategically positioned itself to acquire their IP, highlighting a potential scandal involving Azure revenues.
  • The podcast questions the long-term viability of Nvidia's growth, as it hinges on continued exponential sales of GPUs for AI training, a need that may diminish if alternative training paradigms or TPU adoption become more prevalent.
Reliable Software by Default with Jeremy Edberg

Duration: 00:35:54
July 10, 2025
  • The company DBOS is focused on building reliable software by default through its open-source Transact library, which lets developers checkpoint program state into a database for easier crash recovery and rollbacks (see the sketch after this list).
  • DBOS is designed to address the challenges of serverless development, such as poor local testing and complex configurations, by allowing developers to build and deploy entire reliable programs, including cron jobs and queues, within a single file.
  • A key future direction for DBOS involves leveraging the metadata generated from code checkpointing to enable features like autonomous testing and intrusion detection, positioning the company as an operations expert that allows developers to focus on writing code.
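The checkpoint-and-recover idea from the first bullet above can be illustrated in a few lines of Python. This is a hand-rolled sketch using SQLite, not the actual DBOS Transact API: each step records its result in a table keyed by workflow and step name, so rerunning the program after a crash skips the steps that already completed.

```python
import json
import sqlite3

db = sqlite3.connect("checkpoints.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS checkpoints ("
    "workflow_id TEXT, step TEXT, result TEXT, "
    "PRIMARY KEY (workflow_id, step))"
)

def run_step(workflow_id, step_name, fn):
    """Run fn once per (workflow, step); replay the saved result on reruns."""
    row = db.execute(
        "SELECT result FROM checkpoints WHERE workflow_id = ? AND step = ?",
        (workflow_id, step_name),
    ).fetchone()
    if row:  # step already finished before a crash or restart
        return json.loads(row[0])
    result = fn()
    db.execute(
        "INSERT INTO checkpoints (workflow_id, step, result) VALUES (?, ?, ?)",
        (workflow_id, step_name, json.dumps(result)),
    )
    db.commit()
    return result

# A two-step "workflow"; rerunning the script resumes where it left off.
charge = run_step("order-42", "charge_card", lambda: {"charged": 19.99})
run_step("order-42", "send_receipt", lambda: {"emailed": True})
```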
See Why GenAI Workloads Are Breaking Observability with Wayne Segar

Duration: 00:33:15
June 26, 2025
  • Dynatrace emphasizes helping customers understand the health of their systems and automating fixes to prevent negative user experiences, rather than solely focusing on AI-driven solutions.
  • Enterprises are increasingly creating AI centers of excellence to navigate compliance needs and strategically implement AI projects, often starting with internal applications to gain learnings before external deployment.
  • Dynatrace prioritizes providing value by focusing on data in context with a real-time topology model showing dependencies, enabling customers to understand how AI infrastructure interacts with existing systems.