We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/mpnikhil/lenny-rag-mcp'
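If you prefer to work with the response programmatically rather than via curl, a minimal sketch in Python (assuming the requests package is available and that the endpoint returns a JSON object; the exact response fields are not documented here) could look like this:

import requests

# Fetch metadata for this server from the Glama MCP directory API.
# The URL is taken from the curl example above; the shape of the
# response body is an assumption and may differ in practice.
response = requests.get(
    "https://glama.ai/api/mcp/v1/servers/mpnikhil/lenny-rag-mcp",
    timeout=30,
)
response.raise_for_status()
server = response.json()

# Print whatever top-level fields the API returns for this server.
for key, value in server.items():
    print(f"{key}: {value}")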
If you have feedback or need assistance with the MCP directory API, please join our Discord server.
Kevin Weil.json
{
"episode": {
"guest": "Kevin Weil",
"expertise_tags": [
"Chief Product Officer",
"OpenAI",
"AI/LLM Product Strategy",
"Product Management",
"Model Evaluation",
"AI Product Development",
"Organizational Leadership"
],
"summary": "Kevin Weil, CPO of OpenAI, discusses building products in the rapidly evolving AI landscape where capabilities improve exponentially every few months. He covers how OpenAI operates with bottoms-up empowered teams rather than rigid roadmaps, the critical importance of writing evals (model tests) for AI product development, and why fine-tuned models will become essential across industries. Weil reflects on his career at Facebook, Instagram, and Twitter, shares lessons from the failed Libra cryptocurrency project, and offers perspective on AI's societal impact, education transformation, and the future of creative work with AI assistance.",
"key_frameworks": [
"Model maximalism: build for capabilities that are almost there, models will catch up",
"Iterative deployment: ship early and iterate in public rather than perfecting in private",
"Evals as core product skill: custom evaluations determine product viability based on model accuracy rates",
"Bottoms-up empowerment: lightweight roadmapping with team autonomy over top-down control",
"Thinking in human analogies: reason about LLM behavior by considering equivalent human actions",
"Ensemble approach: breaking problems into specific tasks solved by different specialized models",
"Fine-tuning for specificity: company and use-case-specific models outperform generic base models",
"Poor man's fine-tuning: providing examples in prompts to guide model behavior without formal fine-tuning"
]
},
"topics": [
{
"id": "topic_1",
"title": "AI Models Improve Exponentially Every Few Months",
"summary": "Kevin explains that current AI models are the worst you'll ever use, with capabilities doubling or improving 10x yearly rather than the slower Moore's Law progression of hardware. This fundamental difference requires rethinking how products are built and what features can be attempted.",
"timestamp_start": "00:00:00",
"timestamp_end": "00:02:24",
"line_start": 1,
"line_end": 19
},
{
"id": "topic_2",
"title": "Kevin's Career Journey and Recruiting to OpenAI",
"summary": "Kevin discusses his path from Instagram and Twitter to OpenAI, including the emotional recruiting process where Sam Altman invited him to join. He describes the nine-day anxiety waiting for a response after interviews, drawing parallels to dating and the importance of not jumping to conclusions.",
"timestamp_start": "00:08:53",
"timestamp_end": "00:16:19",
"line_start": 61,
"line_end": 125
},
{
"id": "topic_3",
"title": "Fundamental Differences Working at OpenAI vs Traditional Tech",
"summary": "Kevin explains that unlike traditional companies where underlying technology is fixed, at OpenAI the technology foundation changes every two months. This requires completely different thinking about product strategy, feature planning, and success metrics compared to database or infrastructure-based companies.",
"timestamp_start": "00:16:19",
"timestamp_end": "00:18:45",
"line_start": 125,
"line_end": 135
},
{
"id": "topic_4",
"title": "Evals as Core Skill for AI Product Builders",
"summary": "Kevin introduces evals as unit tests for AI models, explaining they measure model performance on specific tasks. He emphasizes that evals determine product viability: whether a model achieves 60%, 95%, or 99.5% accuracy dramatically changes what product can be built. Deep Research demonstrates using evals to continuously improve model performance for specific use cases.",
"timestamp_start": "00:18:45",
"timestamp_end": "00:24:40",
"line_start": 136,
"line_end": 177
},
{
"id": "topic_5",
"title": "OpenAI's Strategy: Opportunities for Startup Builders",
"summary": "Kevin addresses founder concerns about OpenAI competition, citing Evan Williams' insight that more smart people exist outside company walls than inside. OpenAI focuses on building great APIs rather than capturing all use cases. Industry-specific, company-specific, and vertical-specific data behind company walls create immense opportunities for specialized AI products that foundation models won't build.",
"timestamp_start": "00:24:40",
"timestamp_end": "00:26:24",
"line_start": 177,
"line_end": 186
},
{
"id": "topic_6",
"title": "Shipping Fast with Lightweight Roadmapping and Bottoms-Up Teams",
"summary": "Kevin explains OpenAI's philosophy of lightweight quarterly roadmaps (following Eisenhower's 'plans are useless, planning is helpful') combined with empowered bottom-up teams. Rather than rigid three-month plans, they check dependencies, align thematically, then let teams move fast and iterate. No one should be blocked waiting for executive review if leadership is unavailable.",
"timestamp_start": "00:26:24",
"timestamp_end": "00:30:03",
"line_start": 186,
"line_end": 215
},
{
"id": "topic_7",
"title": "Iterative Deployment and Model Maximalism Philosophy",
"summary": "Kevin outlines two core philosophies: iterative deployment (shipping early and learning in public rather than perfecting privately) and model maximalism (not over-engineering scaffolding around current model limitations since better models arrive soon). Teams should build products on the edge of current capabilities knowing models will catch up in months.",
"timestamp_start": "00:30:03",
"timestamp_end": "00:32:19",
"line_start": 215,
"line_end": 230
},
{
"id": "topic_8",
"title": "Model Competition and OpenAI's Competitive Position",
"summary": "Kevin acknowledges intense model competition from Anthropic, Google, and others, noting that OpenAI's previous 12-month lead has compressed. Different providers excel at different tasks (Anthropic at coding, for instance), spurring competitive improvement. Competition benefits consumers, developers, and businesses. Models are getting smarter, faster, cheaper, and safer with each iteration.",
"timestamp_start": "00:32:50",
"timestamp_end": "00:36:04",
"line_start": 230,
"line_end": 250
},
{
"id": "topic_9",
"title": "Designing AI Products by Thinking Like Humans",
"summary": "Kevin shares that reasoning about LLM behavior as if they were humans often works surprisingly well. Example: when building the reasoning model with thinking time, instead of silence, the model should provide updates like a human would during deliberation. Similarly, ensemble approaches mirror human brainstorming. This human-centric thinking informs better product design.",
"timestamp_start": "00:36:04",
"timestamp_end": "00:39:27",
"line_start": 250,
"line_end": 267
},
{
"id": "topic_10",
"title": "Chat as the Ideal Interface for AI",
"summary": "Kevin argues chat is an underrated interface because it mirrors how humans naturally communicate across all intelligence levels. The versatility of unstructured communication maximizes bandwidth. While specialized interfaces work for high-volume prescribed tasks, chat remains essential as the catch-all baseline for anything users want to express to models.",
"timestamp_start": "00:39:27",
"timestamp_end": "00:43:51",
"line_start": 267,
"line_end": 291
},
{
"id": "topic_11",
"title": "Research and Product Team Integration at OpenAI",
"summary": "Kevin explains OpenAI evolved from pure research company (ChatGPT was a research preview) to balanced research-product company. Best products require engineering, product, design, and research working as integrated teams with continuous feedback loops. Fine-tuning models requires understanding specific use cases and building evals iteratively—not handing finished models to product teams.",
"timestamp_start": "00:45:49",
"timestamp_end": "00:49:36",
"line_start": 313,
"line_end": 344
},
{
"id": "topic_12",
"title": "Hiring PMs: High Agency and Ambiguity Tolerance",
"summary": "Kevin describes ideal OpenAI PM traits: high agency (solving problems without waiting for permission), comfort with massive ambiguity, ability to lead through influence, and strong emotional intelligence. Traditional early-career PM paths don't work where strategy is ill-formed and everyone lacks time. PMs must be decisive when needed while deferring to teams appropriately.",
"timestamp_start": "00:49:36",
"timestamp_end": "00:53:07",
"line_start": 344,
"line_end": 363
},
{
"id": "topic_13",
"title": "AI Adoption in Daily Product Work and Vibe Coding",
"summary": "Kevin shares that despite using AI extensively (ChatGPT for docs, GPT-based specs, evals), product work still looks familiar to his younger self. He expects this to change dramatically. 'Vibe coding' with tools like Cursor and Windsurf—letting models generate code while providing iterative feedback—demonstrates the potential. Chief People Officer Julia exemplifies this by vibe coding internal tools.",
"timestamp_start": "00:53:52",
"timestamp_end": "00:57:05",
"line_start": 365,
"line_end": 387
},
{
"id": "topic_14",
"title": "Future Product Teams: Researchers Built into Every Team",
"summary": "Kevin predicts researchers and ML engineers will become standard on every product team across industries as fine-tuned models become core to workflows. Teams will use custom evals, break problems into specific tasks solved by specialized models, use model ensembles, and tailor models with company-specific data. This mirrors how companies are ensembles of individuals fine-tuned through career experience.",
"timestamp_start": "00:57:36",
"timestamp_end": "01:01:15",
"line_start": 395,
"line_end": 412
},
{
"id": "topic_15",
"title": "Model Ensembles and Specialization in Internal Stack",
"summary": "Kevin explains OpenAI's internal approach of using different models for different purposes: reasoning models for complex problems, O-series for latency-insensitive work, GPT-4 mini for quick checks, fine-tuned models for specific tasks. Customer support demonstrates this: fine-tuned models handle routine questions, suggest answers for complex ones, and learn from human corrections. Ensembles vastly outperform single generic models.",
"timestamp_start": "01:01:15",
"timestamp_end": "01:03:20",
"line_start": 412,
"line_end": 420
},
{
"id": "topic_16",
"title": "Teaching Kids for an AI Future",
"summary": "Kevin emphasizes curiosity, independence, self-confidence, and thinking skills over specific technical skills. His kids (ages 10 and 8) are AI-native and fluent with ChatGPT. Rather than predicting which jobs remain, he focuses on foundational capabilities that work across future scenarios. Coding skills may be relevant long-term but aren't the core focus.",
"timestamp_start": "01:04:47",
"timestamp_end": "01:06:16",
"line_start": 440,
"line_end": 448
},
{
"id": "topic_17",
"title": "AI Personalized Tutoring as Transformative Opportunity",
"summary": "Kevin identifies AI personalized tutoring as potentially the most important AI application, surprising that no major 2-billion-user product exists despite models being capable and studies showing multiple standard deviation improvements. ChatGPT is free and accessible globally, yet adoption remains limited. He sees massive opportunity to transform education globally, especially for underserved populations.",
"timestamp_start": "01:06:16",
"timestamp_end": "01:08:27",
"line_start": 446,
"line_end": 467
},
{
"id": "topic_18",
"title": "Technology Optimism and Managing Transition",
"summary": "Kevin expresses strong technology optimism, noting 200+ years of technology driving advancement in economics, geopolitics, quality of life, and longevity. While acknowledging temporary dislocations and individual impacts matter, he emphasizes the long-term benefit. OpenAI works with administration and policy on education and reskilling, with ChatGPT serving as a reskilling tool.",
"timestamp_start": "01:08:27",
"timestamp_end": "01:10:23",
"line_start": 455,
"line_end": 471
},
{
"id": "topic_19",
"title": "AI Enhancing Creativity Rather Than Replacing It",
"summary": "Kevin argues AI enhances rather than replaces creativity. ImageGen enables non-artists to express creative ideas. Sora allows directors to explore 50 variations of a scene rather than commissioning expensive versions. Creativity still requires human ingenuity and intent; AI is an exploration and refinement tool. He's optimistic about AI-assisted creative work across domains.",
"timestamp_start": "01:10:23",
"timestamp_end": "01:14:38",
"line_start": 470,
"line_end": 494
},
{
"id": "topic_20",
"title": "Rapid Model Improvement Pace and Future Capabilities",
"summary": "Kevin highlights that model capabilities are increasing at massive pace with each O-series model arriving every 3-4 months. Costs have dropped 100x in years. Models are getting smarter, faster, cheaper, and safer, hallucinating less each iteration. This improvement rate (10x yearly) vastly exceeds Moore's Law. Many capabilities still locked in current models will unlock soon.",
"timestamp_start": "01:14:38",
"timestamp_end": "01:17:08",
"line_start": 494,
"line_end": 524
},
{
"id": "topic_21",
"title": "Libra Cryptocurrency Project: Biggest Career Disappointment",
"summary": "Kevin co-led Libra (later Novi) at Facebook with David Marcus, attempting to enable free instant money transfers via WhatsApp to solve remittance costs (people paying 20% fees). The project launched too much at once (new blockchain, currency basket, WhatsApp integration) during Facebook's reputation nadir. While the underlying tech lives on in Aptos and Movement, Kevin regrets not introducing changes more incrementally.",
"timestamp_start": "01:18:26",
"timestamp_end": "01:21:49",
"line_start": 548,
"line_end": 579
},
{
"id": "topic_22",
"title": "Favorite Books, Movies, and Products",
"summary": "Kevin recommends Co-Intelligence by Ethan Mollick on AI application, The Accidental Superpower by Peter Zeihan on geopolitics, and Cable Cowboy biography of dealmaker John Malone. He hasn't watched recent TV but wants to see Amazon's Wheel of Time series. Top Gun 2 exemplified American pride. Favorite products: vibe coding with Windsurf and Waymo autonomous rides.",
"timestamp_start": "01:22:12",
"timestamp_end": "01:24:24",
"line_start": 593,
"line_end": 633
},
{
"id": "topic_23",
"title": "Life Motto: Good Work Consistently Over Long Periods",
"summary": "Kevin's favorite philosophy comes from Mark Zuckerberg's earnings call answer: 'Sometimes it's not any one thing, it's just good work consistently over a long period of time.' He applies this to ultra marathons and career—showing up daily to do good work, improving slightly each day, with compound gains over years. People often seek silver bullets but miss the power of consistency.",
"timestamp_start": "01:24:47",
"timestamp_end": "01:26:15",
"line_start": 647,
"line_end": 659
},
{
"id": "topic_24",
"title": "Prompting Tips and Poor Man's Fine-Tuning",
"summary": "Kevin advises against expecting perfect prompt engineering skills—sharp edges should be smoothed by better AI, not user mastery. Current workaround: poor man's fine-tuning by including examples in prompts ('Here's example X with good answer Y'). Models respond to role-playing ('You are Einstein answering physics') and importance framing ('This matters greatly to my career'). Over time, less prompt engineering should be required.",
"timestamp_start": "01:26:36",
"timestamp_end": "01:29:42",
"line_start": 662,
"line_end": 692
}
],
"insights": [
{
"id": "i1",
"text": "The AI models you're using today are the worst AI models you will ever use for the rest of your life.",
"context": "Kevin opens the podcast with this foundational mindset that should shape how builders approach AI product development.",
"topic_id": "topic_1",
"line_start": 2,
"line_end": 2
},
{
"id": "i2",
"text": "Every two months, computers can do something they've never been able to do before and you need to completely think differently about what you're doing.",
"context": "This captures the core difference between building with AI versus traditional technology stacks.",
"topic_id": "topic_1",
"line_start": 2,
"line_end": 2
},
{
"id": "i3",
"text": "If you're building and the product that you're building is kind of right on the edge of the capabilities of the models, keep going because you're doing something right. Give it another couple months and the models are going to be great.",
"context": "Kevin's model maximalism philosophy: builders should push current model boundaries knowing improvements arrive within months.",
"topic_id": "topic_7",
"line_start": 8,
"line_end": 8
},
{
"id": "i4",
"text": "Everywhere I've ever worked before this, you kind of know what technology you're building on, but that's not true at all with AI.",
"context": "Core insight about the unique challenge of building products when the foundation technology changes every few months.",
"topic_id": "topic_3",
"line_start": 2,
"line_end": 2
},
{
"id": "i5",
"text": "If the model gets it right 60% of the time, you build a very different product than if the model gets it right 95% of the time versus if the model gets it right 99.5% of the time.",
"context": "Evals directly determine product architecture; accuracy rates shape feature design fundamentally.",
"topic_id": "topic_4",
"line_start": 134,
"line_end": 134
},
{
"id": "i6",
"text": "These models are really smart, you need to still teach them things if the data's not in their training set, and there's a huge amount of use cases that are not going to be in their training set because they're relevant to one industry or one company.",
"context": "Models need company-specific and use-case-specific fine-tuning to excel; raw base models have limitations.",
"topic_id": "topic_4",
"line_start": 176,
"line_end": 176
},
{
"id": "i7",
"text": "No matter how big your company gets, no matter how incredible the people are, there are way more smart people outside your walls than there are inside your walls.",
"context": "Evan Williams' insight that explains why OpenAI focuses on APIs rather than trying to serve all use cases directly.",
"topic_id": "topic_5",
"line_start": 182,
"line_end": 182
},
{
"id": "i8",
"text": "Plans are useless. Planning is helpful.",
"context": "Eisenhower quote Kevin uses to explain why OpenAI does lightweight quarterly roadmaps despite technology changing constantly.",
"topic_id": "topic_6",
"line_start": 191,
"line_end": 191
},
{
"id": "i9",
"text": "I would never want us to be blocked on launching something, waiting for a review with me or Sam, if we can't get there. If I'm traveling or Sam's busy or whatever, that's a bad reason for us not to ship.",
"context": "Operational philosophy prioritizing team empowerment and execution speed over approval rituals.",
"topic_id": "topic_6",
"line_start": 218,
"line_end": 218
},
{
"id": "i10",
"text": "We don't spend that much time building scaffolding around the parts that don't match that because our general mindset is in two months there's going to be a better model and it's going to blow away whatever the current set of limitations are.",
"context": "Model maximalism: avoid over-engineering workarounds for current model limitations since they'll soon be obsolete.",
"topic_id": "topic_7",
"line_start": 224,
"line_end": 224
},
{
"id": "i11",
"text": "It used to be that OpenAI had this massive model lead, 12 months or something ahead of everybody else. That's not true anymore. I like to think we still have a lead, I'd argue that we do, but it's certainly not a massive one.",
"context": "Honest assessment of competitive dynamics; lead has compressed from 12 months to smaller margin.",
"topic_id": "topic_8",
"line_start": 239,
"line_end": 239
},
{
"id": "i12",
"text": "You can often reason about it the way you would reason about another human and it works.",
"context": "Counterintuitive insight that LLM behavior can be understood through human analogies, informing better product design.",
"topic_id": "topic_9",
"line_start": 254,
"line_end": 254
},
{
"id": "i13",
"text": "If you asked me something that I needed to think for 20 seconds to answer, I wouldn't just go mute and not say anything. So we shouldn't do that either.",
"context": "Example of using human behavior as design guidance: providing updates during thinking rather than silence.",
"topic_id": "topic_9",
"line_start": 263,
"line_end": 263
},
{
"id": "i14",
"text": "Chat is an amazing interface because it's so versatile. It's the way we talk. The same way you used to have to get deep into MySQL storage engines, you shouldn't need to care about minute details of prompting.",
"context": "Contrarian take defending chat as ideal AI interface; most people think something better will emerge.",
"topic_id": "topic_10",
"line_start": 280,
"line_end": 284
},
{
"id": "i15",
"text": "You don't want that very open-ended, flexible communication medium, it may be that we're speaking and the model's speaking back to me, but you still want the very lowest common denominator, no restrictions way of interacting.",
"context": "Chat remains essential as catch-all baseline even when specialized interfaces handle specific tasks.",
"topic_id": "topic_10",
"line_start": 284,
"line_end": 284
},
{
"id": "i16",
"text": "If you treat those things separately and the researchers go do amazing things and build models and then they get to some state and then the product and engineering teams go take them, we're effectively just an API consumer of our own models.",
"context": "Critical insight about research-product separation failing; best products require integrated teams from the start.",
"topic_id": "topic_11",
"line_start": 332,
"line_end": 332
},
{
"id": "i17",
"text": "I think it's a good thing when you have a PM that is working with maybe slightly too many engineers because it means they're not going to get in and micromanage.",
"context": "Counter-intuitive management insight: slightly overloaded PMs prevent micromanagement and empower engineers.",
"topic_id": "topic_12",
"line_start": 341,
"line_end": 341
},
{
"id": "i18",
"text": "High agency is something that we really look for, people that are not going to come in and wait for everyone else to allow them to do something, they're just going to see a problem and go do it.",
"context": "Key PM hiring criterion at OpenAI; required mindset for ambiguous, rapidly-changing environment.",
"topic_id": "topic_12",
"line_start": 350,
"line_end": 350
},
{
"id": "i19",
"text": "I'm still sort of disappointed by us, and I really mean me, in... if I were to just teleport my five-year-old self leading product at some other company into my day job, I would recognize it still.",
"context": "Kevin's self-critique: product teams should be using AI dramatically more than they currently are.",
"topic_id": "topic_13",
"line_start": 368,
"line_end": 368
},
{
"id": "i20",
"text": "Why shouldn't we be vibe coding demos right, left and center? Instead of showing stuff in Figma, we should be showing prototypes.",
"context": "Practical application: rapid AI-assisted prototyping should replace design mocks for exploration.",
"topic_id": "topic_13",
"line_start": 371,
"line_end": 371
},
{
"id": "i21",
"text": "There's no question that that's the future. Models are going to be everywhere just like transistors are everywhere, AI is going to be just a part of the fabric of everything we do.",
"context": "Vision of AI ubiquity: fine-tuned models will be as embedded in products as transistors in electronics.",
"topic_id": "topic_14",
"line_start": 398,
"line_end": 398
},
{
"id": "i22",
"text": "A company is arguably an ensemble of models that have all been fine tuned based on what we studied in college and what we have learned over the course of our careers.",
"context": "Mental model analogy: companies structure like model ensembles with different specialized components.",
"topic_id": "topic_15",
"line_start": 419,
"line_end": 419
},
{
"id": "i23",
"text": "Some of these places, you want a little bit more reasoning, is not super latency sensitive, so you'll use one of our O series models. In other places, you want a quick check, and you're fine to use four oh mini.",
"context": "Practical ensemble strategy: different models for different requirements rather than one-size-fits-all.",
"topic_id": "topic_15",
"line_start": 416,
"line_end": 416
},
{
"id": "i24",
"text": "I think you teach your kids to be curious, to be independent, to be self-confident, you teach them how to think... those are going to be skills that are going to be important in any configuration of the future.",
"context": "Rather than predicting future jobs, focus on foundational thinking skills that adapt across scenarios.",
"topic_id": "topic_16",
"line_start": 443,
"line_end": 443
},
{
"id": "i25",
"text": "It kind of blows my mind that there is still... I'm kind of surprised that there isn't a 2 billion kid AI personalized tutoring thing because the models are good enough to do it now.",
"context": "Urgent opportunity gap: personalized tutoring is proven transformative yet remains largely unbuilt at scale.",
"topic_id": "topic_17",
"line_start": 449,
"line_end": 449
},
{
"id": "i26",
"text": "Every study out there that's ever been done seems to show that when you have... Like, education is still important, but when you combine that with personalized tutoring, you get multiple standard deviation improvements in learning speed.",
"context": "Evidence-based insight: personalized tutoring creates exceptional educational outcomes.",
"topic_id": "topic_17",
"line_start": 449,
"line_end": 449
},
{
"id": "i27",
"text": "Technology has driven a lot of the advancements that have made us the world and the society that we are today. It drives economic advancements, it drives geopolitical advancements, quality of life, longevity advancement.",
"context": "Technology optimism rooted in 200+ years of evidence; long-term benefits outweigh short-term disruption.",
"topic_id": "topic_18",
"line_start": 458,
"line_end": 458
},
{
"id": "i28",
"text": "It can't just be that the average is good. You've got to also think about how you take care of each individual person as best you can.",
"context": "Nuance on technology optimism: average gains don't excuse individual harms; must address dislocation.",
"topic_id": "topic_18",
"line_start": 461,
"line_end": 461
},
{
"id": "i29",
"text": "ChatGPT is also perhaps the best reskilling app you could possibly want. It knows a lot of things. It can teach you a lot of things if you're interested in learning new things.",
"context": "Practical tool for addressing job transition concerns; free, accessible reskilling resource.",
"topic_id": "topic_18",
"line_start": 464,
"line_end": 464
},
{
"id": "i30",
"text": "Give me ImageGen and I can think some creative thoughts and put something into the model and suddenly have output that I couldn't have possibly done myself. That's pretty cool.",
"context": "Personal example of AI democratizing creativity; non-artists can now produce visual work.",
"topic_id": "topic_19",
"line_start": 473,
"line_end": 473
},
{
"id": "i31",
"text": "You don't type into Sora like, 'Make me a great movie.' It requires creativity and ingenuity, and all these things, but it can help you explore more.",
"context": "AI as exploration tool, not replacement: human creativity and intent remain essential.",
"topic_id": "topic_19",
"line_start": 485,
"line_end": 485
},
{
"id": "i32",
"text": "Models are getting smarter, they're getting faster, they're getting cheaper, and they're getting safer too. They hallucinate less every iteration.",
"context": "Comprehensive improvement trajectory: multiple dimensions improving simultaneously at exponential pace.",
"topic_id": "topic_20",
"line_start": 512,
"line_end": 512
},
{
"id": "i33",
"text": "Sometimes it's not any one thing, it's just good work consistently over a long period of time.",
"context": "Mark Zuckerberg's insight Kevin lives by: consistency and compound gains outweigh silver bullets.",
"topic_id": "topic_23",
"line_start": 650,
"line_end": 650
},
{
"id": "i34",
"text": "People too often look for the silver bullet when a lot of life and a lot of excellence is actually showing up day in and day out, doing good work, getting a little bit better every single day.",
"context": "Reinforcement of consistency philosophy: small daily improvements compound dramatically over years.",
"topic_id": "topic_23",
"line_start": 653,
"line_end": 653
},
{
"id": "i35",
"text": "You can do effectively poor man's fine-tuning by including examples in your prompt of the kinds of things that you might want and a good answer.",
"context": "Practical prompting technique: in-context learning through examples approximates fine-tuning without formal training.",
"topic_id": "topic_24",
"line_start": 671,
"line_end": 671
},
{
"id": "i36",
"text": "You can also say things like, 'I want you to be Einstein. Now, answer this physics problem for me.' There is something where it sort of shifts the model into a certain mindset that can actually be really positive.",
"context": "Role-playing as prompting technique: setting context and identity improves model response quality.",
"topic_id": "topic_24",
"line_start": 680,
"line_end": 680
}
],
"examples": [
{
"id": "e1",
"explicit_text": "When we were building stories at Instagram... we could feel it was going to work because we were all using it internally and we'd go away for a weekend. Before it launched we were all using it and we'd come back after a weekend and we would know what was going on and be like, 'Oh, hey, I saw you were at that camping trip, how was that?' You were like, 'Man, this thing really works.'",
"inferred_identity": "Kevin Weil at Instagram",
"confidence": "high",
"tags": [
"Instagram",
"Stories",
"internal usage test",
"product validation",
"social features",
"launch success",
"viral product"
],
"lesson": "Internal team usage that excites users is a strong signal of product-market fit; if teams aren't naturally using a social feature extensively, reconsider the concept.",
"topic_id": "topic_2",
"line_start": 40,
"line_end": 41
},
{
"id": "e2",
"explicit_text": "Libra is probably the biggest disappointment of my career. It fundamentally disappoints me that this doesn't exist in the world today because the world would be a better place if we'd been able to ship that product.",
"inferred_identity": "Kevin Weil at Facebook",
"confidence": "high",
"tags": [
"Facebook",
"Libra",
"blockchain",
"cryptocurrency",
"remittances",
"WhatsApp",
"Messenger",
"regulation",
"failure",
"missed opportunity"
],
"lesson": "Shipping too much innovation at once during company reputation crisis, without incremental rollout, can sink an otherwise valuable project even with sound underlying vision.",
"topic_id": "topic_21",
"line_start": 14,
"line_end": 14
},
{
"id": "e3",
"explicit_text": "Deep research for people who haven't used it, you can give ChatGPT now an arbitrarily complex query... It's here's a thing that if you were going to answer it yourself, you'd go off and do two hours of reading on the web and then you might need to read some papers and then you would come back and start writing up your thoughts... You can let ChatGPT just like chug for you for 25, 30 minutes.",
"inferred_identity": "Kevin Weil at OpenAI",
"confidence": "high",
"tags": [
"OpenAI",
"ChatGPT",
"Deep Research",
"extended thinking",
"research automation",
"complex queries",
"knowledge synthesis"
],
"lesson": "For complex research tasks, allowing models extended time (25-30 minutes) with iterative refinement can replace human weeks of research; this requires building evals around the use case from the start.",
"topic_id": "topic_4",
"line_start": 159,
"line_end": 161
},
{
"id": "e4",
"explicit_text": "Khan Academy does great things. They're a wonderful partner of ours. Vinod Khosla has a non-profit that's doing some really interesting stuff in this space and is making an impact.",
"inferred_identity": "Khan Academy, Vinod Khosla's education non-profit",
"confidence": "high",
"tags": [
"education",
"tutoring",
"AI application",
"non-profit",
"access",
"learning outcomes"
],
"lesson": "Existing education platforms (Khan Academy) and non-profit efforts (Khosla's work) demonstrate demand but haven't scaled AI personalized tutoring to 2+ billion students despite models being ready.",
"topic_id": "topic_17",
"line_start": 449,
"line_end": 449
},
{
"id": "e5",
"explicit_text": "She vibe coded an internal tool that she had at a previous job that she really wanted to have here at Open AI... and she opened, I don't know, Windsurf or something, and vibe coded it.",
"inferred_identity": "Julia, OpenAI Chief People Officer",
"confidence": "high",
"tags": [
"OpenAI",
"internal tools",
"vibe coding",
"Windsurf",
"AI-assisted development",
"productivity"
],
"lesson": "Modern AI coding tools enable non-engineers or busy executives to rapidly prototype internal tools; if CPO can vibe code, broader teams should embrace this approach.",
"topic_id": "topic_13",
"line_start": 371,
"line_end": 371
},
{
"id": "e6",
"explicit_text": "I was talking to a director recently about Sora... he was saying, for a film that he's doing... you've got some scene where there's a plane zooming into some Death Star-like thing... In the world of two years ago, I would have paid a 3D effects company a hundred grand and they would've taken a month... Now, I can use Sora... and I can get 50 different variations of this cut scene.",
"inferred_identity": "Film director (unnamed, but directed known films like Star Wars level)",
"confidence": "medium",
"tags": [
"film production",
"visual effects",
"Sora",
"video generation",
"creative iteration",
"cost reduction",
"creative exploration"
],
"lesson": "Sora transforms film production workflows from expensive one-off VFX decisions to rapid iterative exploration, enabling directors to brainstorm 50 variations before committing to final production.",
"topic_id": "topic_19",
"line_start": 476,
"line_end": 482
},
{
"id": "e7",
"explicit_text": "When I was at Airbnb, one of the things that I loved most was our experimentation platform where I could set up experiments easily... Eppo does all that and more with advanced statistical methods.",
"inferred_identity": "Kevin Weil at Airbnb (referencing his time there)",
"confidence": "high",
"tags": [
"Airbnb",
"experimentation",
"A/B testing",
"product analytics",
"platform tools",
"growth"
],
"lesson": "Strong internal experimentation infrastructure (like Airbnb's) enables independent feature validation and learning; modern tools like Eppo democratize this capability.",
"topic_id": "topic_11",
"line_start": 23,
"line_end": 23
},
{
"id": "e8",
"explicit_text": "Aptos and Mistin are two companies that are built off of this tech... at least all of the work that we did, did not die and lives on in these two companies, and they're both doing really well.",
"inferred_identity": "Aptos, Movement (formerly Mistin) - blockchain companies built on open-sourced Libra tech",
"confidence": "high",
"tags": [
"Libra",
"blockchain",
"open source",
"Aptos",
"Movement",
"technology reuse",
"recovery"
],
"lesson": "Even when direct product fails, open-sourcing underlying technology creates secondary successes; Libra's blockchain foundation powered two successful blockchain platforms.",
"topic_id": "topic_21",
"line_start": 578,
"line_end": 578
},
{
"id": "e9",
"explicit_text": "We have 400 plus million weekly active users... we get a lot of inbound tickets. I don't know how many customer support folks we have, but it's not very many, 30, 40... and it's because we've automated a lot of our flows. We've got most questions using our internal resources, knowledge base, guidelines... you can teach the model those things.",
"inferred_identity": "OpenAI's customer support infrastructure",
"confidence": "high",
"tags": [
"OpenAI",
"customer support",
"automation",
"scaling",
"fine-tuned models",
"knowledge base",
"400M users"
],
"lesson": "At massive scale (400M+ users), custom fine-tuned models with company knowledge bases can handle majority of support with minimal human team (30-40 people vs typical ratios).",
"topic_id": "topic_15",
"line_start": 416,
"line_end": 416
},
{
"id": "e10",
"explicit_text": "Our naming is horrible... o3 mini high... We name things like o3 mini high. It's absolutely atrocious and we know it, and we will get around to fixing it at some point, but it's not the most important thing.",
"inferred_identity": "OpenAI's model naming strategy",
"confidence": "high",
"tags": [
"OpenAI",
"naming",
"product management",
"prioritization",
"product launches"
],
"lesson": "Bad naming (o3 mini high) doesn't prevent success of world-changing products (ChatGPT still fastest-growing); prioritize capability over naming perfection.",
"topic_id": "topic_7",
"line_start": 212,
"line_end": 206
},
{
"id": "e11",
"explicit_text": "Waymo... my first 10 seconds in a Waymo, you start driving... you're holding onto whatever you can. And then five minutes in, you've calmed down... And then another 10 minutes, you're bored, you're doing email on your phone, answering Slack messages, and suddenly this miracle of human invention is just an expected part of your life.",
"inferred_identity": "Waymo autonomous vehicle",
"confidence": "high",
"tags": [
"Waymo",
"autonomous driving",
"self-driving cars",
"user adaptation",
"technology normalization",
"product experience"
],
"lesson": "Miraculous technology gets taken for granted within 15 minutes; this mirrors how users will adapt to and expect new AI capabilities.",
"topic_id": "topic_1",
"line_start": 89,
"line_end": 89
},
{
"id": "e12",
"explicit_text": "GPT-3 launched... it was mind-blowing. And if I gave you GPT-3 now I just plugged that into ChatGPT for you... you'd be like, 'What is this thing?' It's like mess.",
"inferred_identity": "OpenAI's GPT-3 vs modern models",
"confidence": "high",
"tags": [
"OpenAI",
"GPT-3",
"model improvement",
"capability growth",
"user expectations",
"version history"
],
"lesson": "Model capabilities improve so rapidly that state-of-the-art from 2-3 years ago appears inadequate by current standards.",
"topic_id": "topic_1",
"line_start": 83,
"line_end": 83
},
{
"id": "e13",
"explicit_text": "At my previous company... I had at a previous job... she vibe coded... our chief people officer, Julia, was telling me the other day, she vibe coded an internal tool",
"inferred_identity": "Julia (OpenAI Chief People Officer), Kevin Weil's previous roles",
"confidence": "medium",
"tags": [
"OpenAI",
"internal tools",
"vibe coding",
"adoption",
"AI-assisted development",
"leadership"
],
"lesson": "When executive leadership (CPO level) adopts AI-assisted development workflows, it signals to organization this is expected and valued.",
"topic_id": "topic_13",
"line_start": 371,
"line_end": 371
},
{
"id": "e14",
"explicit_text": "We were designing evals at the same time as we were thinking about how this product was going to work... turning those into evals and then hill climbing on those evals... as we were fine-tuning our model for deep research... we were able to test is it getting better on these evals.",
"inferred_identity": "OpenAI's Deep Research product development",
"confidence": "high",
"tags": [
"OpenAI",
"Deep Research",
"evals",
"fine-tuning",
"product design",
"iterative improvement"
],
"lesson": "Build evals alongside product design from start, not after; use evals to measure model improvement and product viability in parallel.",
"topic_id": "topic_4",
"line_start": 161,
"line_end": 164
},
{
"id": "e15",
"explicit_text": "The remittance space, people sending money to family members in other countries... incredibly regressive... people that don't have the money to spend are having to pay 20% to send money home... 3 billion people using WhatsApp... especially friends and family... Why can't you send money as immediately, as cheaply, as simply as you send a text message?",
"inferred_identity": "Global remittance market problem that Libra aimed to solve",
"confidence": "high",
"tags": [
"remittances",
"financial inclusion",
"emerging markets",
"WhatsApp",
"payments",
"Libra",
"unmet need"
],
"lesson": "Clear unmet need (20% remittance fees, multi-day settlement) with massive addressable market (3B WhatsApp users) provides strong product motivation but requires stakeholder buy-in during reputation challenges.",
"topic_id": "topic_21",
"line_start": 551,
"line_end": 554
}
]
}
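To illustrate how a client of this MCP server might consume the file above, here is a minimal Python sketch; the local file path is assumed for illustration, and it relies only on the episode, topics, and insights fields visible in the JSON. The examples array links to topics through the same topic_id field, so the same grouping works for it.

import json

# Load the episode file shown above; the local path is assumed for illustration.
with open("Kevin Weil.json", encoding="utf-8") as f:
    data = json.load(f)

episode = data["episode"]
print(f"{episode['guest']} - {len(data['topics'])} topics, {len(data['insights'])} insights")

# Group insight quotes by the topic they belong to, via the topic_id field.
insights_by_topic = {}
for insight in data["insights"]:
    insights_by_topic.setdefault(insight["topic_id"], []).append(insight["text"])

# Example lookup: the insights attached to topic_4 (evals as a core product skill).
for text in insights_by_topic.get("topic_4", []):
    print("-", text)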