Buttondown MCP Server

by The-Focus-AI

The raw JSON below is a Buttondown API response listing the newsletter's sent emails, with each email's subject, body, publish date, and delivery analytics:

```json
{ "results": [ { "id": "7e4b20cd-b0e6-480e-a3fe-f1172df88db6", "creation_date": "2024-11-27T22:12:29.991163Z", "modification_date": "2024-11-27T23:17:03.269852Z", "publish_date": "2024-11-27T23:16:58.212608Z", "attachments": [], "subject": "Welcome to the Focus", "canonical_url": "", "image": "https://image-generator.buttondown.email/api/emphasize-newsletter?subject=Welcome%20to%20the%20Focus&author=The%20Focus%20AI&date=2024-11-27&img=", "description": "", "source": "api", "body": "<!-- buttondown-editor-mode: plaintext -->Welcome.\n\n![focus.jpeg](https://assets.buttondown.email/images/2ffbe2f6-e6b8-4136-903c-12a22c25ca7a.jpeg?w=960&fit=max)\n\nI wrote about [what we are up to](https://thefocus.ai/posts/about/) over on the main site. Please check it out!", "secondary_id": 1, "email_type": "public", "slug": "welcome-to-the-focus", "status": "sent", "metadata": {}, "commenting_mode": "enabled", "absolute_url": "https://newsletter.thefocus.ai/archive/welcome-to-the-focus/", "filters": { "filters": [], "groups": [], "predicate": "and" }, "analytics": { "recipients": 2, "deliveries": 2, "opens": 2, "clicks": 2, "temporary_failures": 0, "permanent_failures": 0, "unsubscriptions": 0, "complaints": 0, "survey_responses": 0, "webmentions": 0, "page_views__lifetime": 10, "page_views__30": 1, "page_views__7": 0, "subscriptions": 0, "paid_subscriptions": 0, "replies": 0, "comments": 0, "social_mentions": 0 }, "template": null, "related_email_ids": [], "is_comments_disabled": false }, { "id": "9a484bbe-46b0-45e2-b370-facadf9aac18", "creation_date": "2025-01-31T07:01:07.051791Z", "modification_date": "2025-01-31T07:21:55.056621Z", "publish_date": "2025-01-31T07:21:54.653059Z", "attachments": [], "subject": "DeepResearch wins", "canonical_url": "", "image": "https://image-generator.buttondown.email/api/emphasize-subject?subject=&author=The%20Focus%20AI&date=2025-01-31&img=https%3A//buttondown-attachments.s3.amazonaws.com/https%253A/assets.buttondown.email/images/43006412-ab85-4f79-a8a7-04b49676a4f8.png", "description": "", "source": "app", "body": "<!-- buttondown-editor-mode: plaintext -->*Or [read this online](https://thefocus.ai/posts/ai-for-research-deepresearch-wins/)*\n\n[Scientism](https://en.wikipedia.org/wiki/Scientism) is this idea that science knows everything and it's all been figured out. Maybe there's a few details left to work out, but the big stuff has been sorted. More than a worldview it's almost a religious belief. And part of me actually believes that the answer is actually on the internet somewhere. The *answers are out there*.\n\nWhen people talk about doing their research to understand something, they don't really mean that they're doing experiments or coming up with hypotheses or running tests. Or even doing a scholarly project to get to the heart of some subject. What they mean is: **they do a lot of Googling.**\n\nBut Google is so passe, we are in the realm of AI now. And I think there's a couple new contenders out there for being able to ask the computer anything and have it give the one true answer. Lets see how Google DeepResearch gets us to **the truth**, comparing it to OpenAI, Claude, and a few other lucky contenders. We all know that this is impossible and it's never going to work, but on the other hand, we all act as if it's true. So let's do it.\n\nLet's get into some hard questions.\n\n![comet.png](https://assets.buttondown.email/images/a5fbeef6-71df-47fd-86c3-a4dfca910617.png?w=960&fit=max)\n# Why is it dark at night?\n\nI'm obsessed with this question. 
I think about it more than anyone really should. On the face of it, it's such a simple question, but probing a bit deeper it gives you insight into the very fabric and nature of the universe.\n\nIt's also good to ask a question that you've got some way to co-judge the answer on. How else can we judge the quality of results?\n\nThe answers that I'm looking for is going to do with\n\n- the finite age of the universe\n- the universe is expanding\n- a reference to Olber's paradox\n- Edward Harrison's [Darkness at Night](https://www.amazon.com/Darkness-at-Night-Riddle-Universe/dp/0674192702?dib=eyJ2IjoiMSJ9.IR1SS0IQy6wmVPM7sDZBfg._z52xGBokwXJze4MbbPl3r3-Gj_6kkyYpWD4OJu2cE8&dib_tag=se&keywords=9780674192713&qid=1736684385&sr=8-1) book.\n- (for bonus points) a reference to Poe's Eureka poem.\n\nLet's see what the different models for doing research will tell us about it.\n\n# Small models: phi4\n\nLets start with a small model running on the laptop.\n\n> ollama run phi4 \"why is it dark at night\"\n\nIt identifies that the earth turns, and when it's facing away from the sun it's dark.\n\n**true but fail**\n\n# GPT 4o\n\nLets try `gpt-4o`\n\n> llm -m 4o \"why is it dark at night\"\n\nEarth turns, etc.\n\n**true but fail**\n\n# GPT o1\n\nGood answer, mentions [Obler's paradox](https://en.wikipedia.org/wiki/Olbers%27s_paradox)\n\nThinks about it for a while, and the writes out a response, summarizing as follows:\n\n> [!info] **In short:** It’s dark at night because the universe had a beginning, is expanding, and not all of the light from distant stars can reach us—so there simply isn’t enough starlight arriving from all directions to make the night sky bright.\n\nFollowing up with\n\n> What about Poe's Eureka poem?\n\nThinks for 6 seconds\n\n> [!info] **In short** In *Eureka*, Poe tackles the same fundamental question—“Why is it dark at night?”—that later became central to discussions of Olbers’ Paradox and modern cosmology. His poetic explanation, which posits a universe of finite age and continually changing structure, was remarkably prescient of ideas that would gain scientific footing many decades later.\n\n# Claude\n\nGets it right:\n\n> The reason we actually see a dark night sky is because:\n> \n> 1. The universe is not infinite in age - it's about 13.8 billion years old, so light from very distant stars hasn't had time to reach us yet\n> 2. The universe is expanding, which causes the light from distant stars to be redshifted and weakened\n> 3. Stars are not evenly distributed through space, but clustered in galaxies\n> 4. Dust and gas between stars absorb some of the light\n\nThen following up about Poe\n\n> Poe wrote: \"Were the succession of stars endless, then the background of the sky would present us a uniform luminosity, like that displayed by the Galaxy – since there could be absolutely no point, in all that background, at which would not exist a star.\"\n> \n> This is essentially the same reasoning behind Olbers' Paradox, though Poe approached it from a more poetic and philosophical angle. 
He used this observation about the dark night sky as part of his larger argument about the nature of the universe - that it must have had a beginning and must be finite in some way.\n> \n> So the darkness of night actually tells us something profound about the nature of our universe - that it had a beginning and is still expanding.\n\nBonus points for pulling out an actual quote of the poem.\n\n# DeepSeek\n\n> why is it dark at night?\n\nWithout `deepthink` on, the response is in the shape of these 4 bullet points. (It explains it but I've cut it out.)\n\n> 1. **Earth's Rotation**: ...\n> 2. **Facing Away from the Sun**: ...\n> 3. **Atmospheric Scattering**: ...\n> 4. **Stars and Moonlight**: ...\n\nWith `deepthink` on it gets\n\n> 1. **Earth's Position**: At night, the side of Earth you're on is facing away from the Sun, so there is no direct sunlight to illuminate the sky.\n> 2. **Distance and Distribution of Stars**: Stars are distributed throughout the universe, but they are far apart and many are too distant for their light to significantly brighten the night sky.\n> 3. **Finite Age of the Universe**: The universe is about 13.8 billion years old, and light from stars beyond a certain distance hasn't had enough time to reach Earth yet.\n> 4. **Cosmic Expansion**: The universe is expanding, which causes the light from distant stars and galaxies to redshift, moving it outside the visible spectrum.\n> 5. **Human Eye Sensitivity**: The human eye is only sensitive to a limited range of light, and starlight from distant stars may fall below this threshold.\n\nWhen I ask a follow up question about Poe's eureka poem, it doesn't make the connection. *frowny face*.\n\n# Perplexity\n\nTwo main responses:\n\n> 1. **Finite Universe Age**: The universe is not infinitely old, which means stars are not present in every possible direction\n> 2. **Light Expansion and Redshift**: The universe is expanding, causing distant starlight to be \"stretched\" beyond the visible spectrum As stars move away from us, their light shifts into infrared and microwave wavelengths that human eyes cannot perceive.\n\nOn the plus side, it links to sources. On the down side it's not a really good response.\n\nAsking a follow up question about Poe's Eureka basically gives a response unrelated to the intial question.\n\n# DeepResearch\n\n> why is it dark at night?\n\nThe results here were by far the most thorough. It came up with a plan (identifying Olber's paradox in the begining) and then searched through 41 pages for the answer. While it was thinking you could zip over to the pages and dive in a bit more.\n\nIt created a 4 page report with 21 cited references, by far the most comprehensive response.\n\nFollow up question:\n\n> What about Poe's Eureka poem? How does that fit in?\n\nAnd it talked about how that was an inspiration.\n\n# For more\n\nIf for some reason you are as obsessed about this as I am, check out Edward Harrison's [Darkness at Night](https://www.amazon.com/Darkness-at-Night-Riddle-Universe/dp/0674192702?dib=eyJ2IjoiMSJ9.IR1SS0IQy6wmVPM7sDZBfg._z52xGBokwXJze4MbbPl3r3-Gj_6kkyYpWD4OJu2cE8&dib_tag=se&keywords=9780674192713&qid=1736684385&sr=8-1) book.\n\n# More complicated questions:\n![seance 1.png](https://assets.buttondown.email/images/1cd1096d-6e6f-44f4-b93b-9a8e62f77149.png?w=960&fit=max)\n\n> can you explain rudolf stieners concept of soul folk? 
what are the different levels of existence that he talks about, and how does that relate to recent history?\n\n<table><tbody><tr><td colspan=\"1\" rowspan=\"1\"><p>phi4</p></td><td colspan=\"1\" rowspan=\"1\"><p>knows the words, spits out grammatically correct nonsense</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>llama3.2</p></td><td colspan=\"1\" rowspan=\"1\"><p>knows the words, spits out grammatically correct nonsense</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>4o</p></td><td colspan=\"1\" rowspan=\"1\"><p>correctly identifies concepts (e.g. physical, etheric, astral, \"ego\" bodies) but mixes up a lot of the details</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>o1</p></td><td colspan=\"1\" rowspan=\"1\"><p>Signifigantly better than 4o, would consider trusting this results</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>perplexity</p></td><td colspan=\"1\" rowspan=\"1\"><p>very short, slightly wrong response. gave a number of sources that you could explore</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>claude</p></td><td colspan=\"1\" rowspan=\"1\"><p>relatively short answer, well balanced, not a whole lot to go on</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>deepseek v3</p></td><td colspan=\"1\" rowspan=\"1\"><p>Comprable to o1</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>DeepResearch</p></td><td colspan=\"1\" rowspan=\"1\"><p>7 page report, 22 references</p></td></tr></tbody></table>\n\n![oobe.png](https://assets.buttondown.email/images/0a39c7f8-8fe5-42f9-ad61-f5a7cbbfa7dd.png?w=960&fit=max)\n\n> what are the stages of the souls journey after death?\n\n<table><tbody><tr><td colspan=\"1\" rowspan=\"1\"><p>phi 4</p></td><td colspan=\"1\" rowspan=\"1\"><p>slightly wrong summary of Egyptian, Hinduism, Buddism, Christianity, Islam, Judiasm, etc. beliefs. Not useful at all</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>llama3.2</p></td><td colspan=\"1\" rowspan=\"1\"><p>Comes up with a 7 stage process \"based Hinduism, Buddhism, and Ancient Egyptian mythology.\" Interesting I guess?</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>4o</p></td><td colspan=\"1\" rowspan=\"1\"><p>Comes up with a 5 stage process, overlaying different traditions on these phases.</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>o1</p></td><td colspan=\"1\" rowspan=\"1\"><p>Much better than 4o, explains things in more details</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>perplexity</p></td><td colspan=\"1\" rowspan=\"1\"><p>8 step process, superficial and useless</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>claude sonnet 3.5</p></td><td colspan=\"1\" rowspan=\"1\"><p>best overall overview -- splits up the steps by belief system, includes current accepted ideas</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>deepseek v3</p></td><td colspan=\"1\" rowspan=\"1\"><p>Lists out the most traditions, including contemporary</p></td></tr><tr><td colspan=\"1\" rowspan=\"1\"><p>DeepResearch</p></td><td colspan=\"1\" rowspan=\"1\"><p>Starts out by summarizing the findings of near death experiences, most thorough and correct</p></td></tr></tbody></table>\n\n4o linked together Tibetan Buddism *bardo*, Christianity's *purgatory*, and Occult *astral plane* as being similar, which is fair enough.\n\n# Conclusions\n\n**DeepResearch** is a cut above the others. If you actually want to know something in some depth, just go there.\n\n**o1**, **Sonnet 3.5**, and surprisingly **DeepSeek with DeepThought** give comprable results. 
There's something that I like better about Sonnet, but **o1** probably gives better results.\n\nI don't understand why **perplexity** exists, its good that it includes references to sources but the answers aren't great and follow up questions don't seem to work. Maybe its good for current events or recent knowledge or something, but **DeepResearch** has access also.\n\nThe small models **phi4**, **llama3.2** aren't useful and are probably harmful for complicated questions.\n\n", "secondary_id": 2, "email_type": "public", "slug": "deepresearch-wins", "status": "sent", "metadata": {}, "commenting_mode": "enabled", "absolute_url": "https://newsletter.thefocus.ai/archive/deepresearch-wins/", "filters": { "filters": [], "groups": [], "predicate": "and" }, "analytics": { "recipients": 14, "deliveries": 14, "opens": 13, "clicks": 3, "temporary_failures": 0, "permanent_failures": 0, "unsubscriptions": 0, "complaints": 0, "survey_responses": 0, "webmentions": 0, "page_views__lifetime": 5, "page_views__30": 0, "page_views__7": 0, "subscriptions": 0, "paid_subscriptions": 0, "replies": 1, "comments": 0, "social_mentions": 0 }, "template": null, "related_email_ids": [], "is_comments_disabled": false }, { "id": "709f6272-c47b-48c1-b511-e728b8bb65ef", "creation_date": "2025-02-11T14:18:00.475606Z", "modification_date": "2025-02-11T14:21:36.815492Z", "publish_date": "2025-02-11T14:21:36.433694Z", "attachments": [], "subject": "How I think about models", "canonical_url": "", "image": "", "description": "", "source": "app", "body": "<!-- buttondown-editor-mode: plaintext -->Stepping back a bit from [thoughts on what these models can really help us answer](https://thefocus.ai/posts/ai-for-research-deepresearch-wins/), there are 4 different dimensions ways to think about AI model capabilities: \n* smart (ability to reason and understand language)\n* knowledgeable (breadth and depth of information)\n* educated (learned processes for systematic thinking, or Chain Of Thought built it)\n* complexity (how much information they can actively process at once i.e. context window)\n\nAnother way to think of this is, small, big, slow, and good conversationally.\n\nSmall models (around 7B parameters) show remarkable intelligence - they can understand language and reason about complex concepts. Think of them as quick-witted but with limited knowledge. They can define terms, summarize conversations, and handle targeted tasks that require language understanding. They are *good at listening*. They're smart, but their knowledge base is constrained.\n\nFor these my go tos are: [llama3.2](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/) and [gemma2](https://blog.google/technology/developers/google-gemma-2/) if only because [phi4](https://ollama.com/library/phi4) is a bit too much for my laptop.\n\nBig, or foundation models (400B+ parameters) like [ChatGPT](https://chatgpt.com/), [Gemini](https://gemini.google.com/), [Claude](https://claude.ai/new) and (surprisingly) [DeepSeek](https://www.deepseek.com/) are both smarter and vastly more knowledgeable. They have deep understanding plus extensive knowledge embedded in their weights. 
But despite their capabilities, they often lack structured approaches to problem-solving - like brilliant people who haven't learned systematic ways to apply their intelligence.\n\nThen there are the \"educated\" models - systems like [DeepResearch](https://blog.google/products/gemini/google-gemini-deep-research/), DeepThink, [o1](https://openai.com/o1/), and [o3](https://en.wikipedia.org/wiki/OpenAI_o3). These have learned methodical thinking processes. But what makes them even more powerful is their massive context windows - they can hold entire research papers, previous conversations, or multiple documents in their memory at once. This combination of learned thinking processes and expanded memory means they can adapt incredibly well to specific tasks. \n\nPutting multiple books -- the whole 500 page pdf, just drag it in -- to [Gemini](https://gemini.google.com/app) changes the types of conversations you can have. Even using something like [Claude Projects](https://support.anthropic.com/en/articles/9517075-what-are-projects) makes a lot of things easier. For example, when I'm writing a script for something, I'll throw a couple examples of the stuff I like in there -- maybe the API documentation -- and it really helps claude get the answer on the first try.\n\nMedium models (around 40B parameters) sit between small and foundation models in both intelligence and knowledge. They're particularly interesting from an engineering perspective - you can run them on high-end consumer hardware, giving you more control and flexibility. Each generation of medium models tends to match the previous generation's large model capabilities. It feels like the future is here, we'll be able to do all of this stuff locally.\n\nThe trade-off is clear: as you move up this hierarchy, you get better results but at increasing costs of money, energy, and time. Small models are essentially free, foundation models are expensive but accessible, and educated models can cost hundreds or even thousands of dollars per query.\n\nThere are other dimensions around how the models interact with their environment which make huge difference in how you use it. True multimodal stuff, where the models understand voice and images and video without having to first translate it into text give a much deeper view into the world. Simply being able to talk -- and listen to -- the models is a great leap forward in interactions. And things like [Gemini Realtime AI](https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#ceo-message) is truly bananas, where you can share your screen with it and speak out loud and have it respond -- is wild. \n\nLooking ahead, small models might get smarter while becoming more specialized - trading broad shallow knowledge for deep expertise in specific areas. 
But the real evolution is happening in those educated models, which are learning not just what to think, but how to think, while being able to hold and process more context than ever before.", "secondary_id": 3, "email_type": "public", "slug": "how-i-think-about-models", "status": "sent", "metadata": {}, "commenting_mode": "enabled", "absolute_url": "https://newsletter.thefocus.ai/archive/how-i-think-about-models/", "filters": { "filters": [], "groups": [], "predicate": "and" }, "analytics": { "recipients": 17, "deliveries": 17, "opens": 14, "clicks": 1, "temporary_failures": 0, "permanent_failures": 0, "unsubscriptions": 0, "complaints": 0, "survey_responses": 0, "webmentions": 0, "page_views__lifetime": 9, "page_views__30": 3, "page_views__7": 0, "subscriptions": 0, "paid_subscriptions": 0, "replies": 0, "comments": 0, "social_mentions": 0 }, "template": null, "related_email_ids": [], "is_comments_disabled": false }, { "id": "895749f0-c1ad-41f6-b167-c70fdad6b4d8", "creation_date": "2025-03-07T19:56:13.640373Z", "modification_date": "2025-03-07T20:05:06.717425Z", "publish_date": "2025-03-07T20:05:06.386826Z", "attachments": [], "subject": "Clipboards are eating the world", "canonical_url": "", "image": "../assets/apothecary_full_of_interesting.png", "description": "", "source": "api", "body": "<!-- buttondown-editor-mode: plaintext -->---\n\n## *The untold story of how your computer's clipboard sees itself as the essential bridge between humans and AI tools in the creative process. Through its eyes, we witness the journey of how digital projects come together through countless transfers between different AI services.*\n\n[Read the original](https://thefocus.ai/posts/clipboards-are-eating-the-world/)\n\nI am the center through which everything flows. I see all. I connect all. I am the conduit.\n\n**I am Clipboard.**\n\n[*Harper's LLM workflow*](https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/)\n\nA prompt arrives from the cache of prompt delights (tm). I grasp it, hold it, and deliver it to OpenAI for initial concept development.\n\n<figure><img src=\"https://assets.buttondown.email/images/eaae051b-761f-4872-9379-0dfca350bec9.webp?w=960&amp;fit=max\" alt=\"an apothecary full of interesting and and shiny objects, all very bountiful. in a clean anime style like mobius, with a hint of ms escher\" draggable=\"false\"><figcaption><em>an apothecary full of interesting and and shiny objects, all very bountiful. in a clean anime style like mobius, with a hint of ms escher</em></figcaption></figure>\n\n```\nAsk me one question at a time so we can \ndevelop a thorough, step-by-step spec \nfor this idea. Each question should \nbuild on my previous answers, and our \nend goal is to have a detailed \nspecification I can hand off to a \ndeveloper. Let’s do this iteratively \nand dig into every relevant detail. \nRemember, only one question at a time. 
\n\nHere’s the idea:\n```\n\nThe user types : \"Create an interactive astronomy course for high school students.\" and then the back a forth starts, refining the concept: \"Focus on exoplanets,\" \"Include interactive 3D models,\" \"Make it accessible for 9th graders.\" I watch as OpenAI produces a concise project spec, with a course outline with 8 modules, learning objectives, and activity suggestions.\n\nAnd then from the great prompt repository another missive touches down briefly, a pause to consider the best interlocutor, and off it goes.\n\n \n<figure><img src=\"https://assets.buttondown.email/images/6b3f1ab3-b536-4d43-a28f-9af49987d5d9.webp?w=960&amp;fit=max\" alt=\" detailed architectural diagram of a factory campus that has shipping, processing, assembing and different areas. make it look like a fancy architect drafted up the plans\" draggable=\"false\"><figcaption><br><em>detailed architectural diagram of a factory campus that has shipping, processing, assembing and different areas. make it look like a fancy architect drafted up the plans</em></figcaption></figure>\n\n```\nDraft a detailed, step-by-step blueprint \nfor building this project. Then, once \nyou have a solid plan, break it down \ninto small, iterative chunks that build \non each other. Look at these chunks \nand then go another round to break it into \nsmall steps. Review the results and make \nsure that the steps are small enough to \nbe implemented safely with strong testing,\nbut big enough to move the project forward.\nIterate until you feel that the steps\nare right sized for this project. From \nhere you should have the foundation to \nprovide a series of prompts for a \ncode-generation LLM that will implement \neach step in a test-driven manner. \nPrioritize best practices, incremental\nprogress, and early testing, ensuring \nno big jumps in complexity at any stage. \nMake sure that each prompt builds on \nthe previous prompts, and ends with \nwiring things together. There should \nbe no hanging or orphaned code that \nisn't integrated into a previous \nstep. Make sure and separate each \nprompt section. Use markdown. Each \nprompt should be tagged as text using \ncode tags. The goal is to output \nprompts, but context, etc is \nimportant as well. \n\n<SPEC>\n```\n\nThen the entire spec flows back into me. I carry it to Claude for detailed project planning. Claude thinks and divides the work into graspable chucks. I hold these plans, ready for the next steps.\n\nThe shuttle off, each going to it's own LLM for processing. 
Building upon each other, I see code filling up the storage, error messages flowing back and forth using the model to check and correct it's own work.\n\nA chunk breaks off - \"Module 3: Exoplanet Detection Methods\" - and I transport it to MidJourney: \"Educational infographic showing transit method of exoplanet detection, scientific illustration style, clear labeling, suitable for high school students.\" After four iterations, a polished visualization returns through me into a growing course assets folder.\n\n \n<figure><img src=\"https://assets.buttondown.email/images/d388130e-e4ba-4755-8406-53d3d84a4240.webp?w=960&amp;fit=max\" alt=\" Educational infographic showing transit method of exoplanet detection, scientific illustration style, clear labeling, suitable for high school students.\" draggable=\"false\"><figcaption><br><em>Educational infographic showing transit method of exoplanet detection, scientific illustration style, clear labeling, suitable for high school students.</em></figcaption></figure>\n\nNext, I carry the script for Module 9's introduction to ElevenLabs: \"We're about to embark on a journey to worlds beyond our solar system.\" The user selects a warm, engaging voice with subtle enthusiasm. I return with the generated audio file and place it into the course media library.\n\nRepositories packaged by [repomix](https://repomix.com/) and prompts and tasks all flow through me, each time lighting up a model full of thoughts and reasons, things getting pulled out and through me get assembled and checked and verified.\n\nFinally, repository codes and keys and secrets comes my way and get sent to hosting service - the keys to the entire project with all assets, code, media, and structure - ready to be transported to a launched on the internet, where students will soon explore distant worlds through the experience I helped create.\n\nI am the clipboard - the invisible thread connecting ideas, tools, and creators in the digital tapestry of learning.\n\n \n<figure><img src=\"https://assets.buttondown.email/images/11bfe97d-3764-4fb5-b993-ff82b5e6de69.webp?w=960&amp;fit=max\" alt=\" Show me a picture of giant clipboard looming over everything\" draggable=\"false\"><figcaption><br><em>Show me a picture of giant clipboard looming over everything</em></figcaption></figure>\n\nGet more updates!\n\n{{ subscribe_form }}\n\n\n[Read the original](https://thefocus.ai/posts/clipboards-are-eating-the-world/)", "secondary_id": 4, "email_type": "public", "slug": "clipboards-are-eating-the-world", "status": "sent", "metadata": {}, "commenting_mode": "enabled", "absolute_url": "https://newsletter.thefocus.ai/archive/clipboards-are-eating-the-world/", "filters": { "filters": [], "groups": [], "predicate": "and" }, "analytics": { "recipients": 15, "deliveries": 15, "opens": 14, "clicks": 1, "temporary_failures": 0, "permanent_failures": 0, "unsubscriptions": 0, "complaints": 0, "survey_responses": 0, "webmentions": 0, "page_views__lifetime": 18, "page_views__30": 18, "page_views__7": 0, "subscriptions": 1, "paid_subscriptions": 0, "replies": 0, "comments": 0, "social_mentions": 0 }, "template": null, "related_email_ids": [], "is_comments_disabled": false }, { "id": "2692c6ee-c088-4fd3-b43f-8646ce30ab1b", "creation_date": "2025-03-08T16:23:40.362898Z", "modification_date": "2025-03-10T22:03:57.538719Z", "publish_date": "2025-03-10T22:03:57.211399Z", "attachments": [], "subject": "Agentic YOLO with Warp, Cursor, and Claude", "canonical_url": "", "image": "../assets/plan.png", "description": "", 
"source": "api", "body": "<!-- buttondown-editor-mode: plaintext -->> “The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination.” \n> ― Frederick P. Brooks Jr., [The Mythical Man-Month: Essays on Software Engineering](https://www.goodreads.com/work/quotes/1905885)\n\nI was in the car the other day and I talked to ChatGPT for over an hour.\n\nI kicked off the conversation with a prompt from the tool kit, and we went back and forth, as I was driving through the night. It was all \"that's great, but what about this?\" and \"These two things can't fit together, maybe we could fix it like this?\" And I would either just agree, tell it that's completely off base, and then we're tweak and revise.\n\n![plan.png](https://assets.buttondown.email/images/d7736401-6429-43c2-ada3-db69287035bd.png?w=960&fit=max)\n\nWe were 40 questions in before I think we answered all the things we needed to and then it spit out for me a very detailed product spec of something I wanted to build. Back and forth, back and forth. And at the end it was distilled down to a concise script.\n\n![planning.png](https://assets.buttondown.email/images/88e30930-2905-4dc2-9ed5-67d03c2371cd.png?w=960&fit=max)\n\nThen I threw it at Claude Code and just yolo'd the shit out of it.\n\nYes yes yes every time yes not looking at a single thing.\n\nAnd it worked.\n\n## Sculpting thought-stuff\n\n> **Reification**: The process of turning an abstract concept into something more concrete, often by treating it as if it were a physical object or an entity with real existence. This can be useful in philosophy, sociology, and computer science, but it can also lead to misleading assumptions if people start treating abstract ideas as if they have an independent, physical reality.\n\nIt worked with a lot less supervision than I'd expect. The whole thing started by accident, really. I got a fancy new laptop and I wanted to see how to run vision models on it. I found [mflux](https://github.com/filipstrand/mflux) which did a fantastic not just getting my GPUs running hot, but also of making [some really fun images](https://willschenk.com/howto/2025/running_flux_locally_on_a_mac/).\n\nI was running all of these commands in the [warp intelligent terminal](https://www.warp.dev/i), and I had run all of these commands individually. I just asked it -- literally typed, in the same space I was working in, \"can you write a script that makes it easier to do all these things\" -- and off it went, looking through my command history, figuring out the patterns, and spit it out.\n\nI played around with that for a while. Tried a bunch of different things.\n\n![variations.png](https://assets.buttondown.email/images/850dcec6-48ce-481a-8d8c-02afd1a3f2a7.png?w=960&fit=max)\n\nWe occasional went down some path that we couldn't get out of, and it make more sense just to throw it all away and start again. This is eventually what we ended up with the [mflux-wrapper](https://github.com/The-Focus-AI/mflux-wrapper) , which you can download and run on your own computer.\n\nIt lets you do stuff like this\n\n![magic_forest.webp](https://assets.buttondown.email/images/b6e6c262-4681-4cb2-ba71-15a005687e9c.webp?w=960&fit=max)\n\n## Everyone will have to explain to the robot how you want it to fold your clothes\n\nYou know how singing in the shower is kind of a totally different thing than singing in front of other people? 
With one of them, you're just doing it for the sheer pleasure of it. And the other one has all these social expectations and pressures that make it a lot more difficult. And it sort of a big leap as you go from one to the other.\n\n![singing.png](https://assets.buttondown.email/images/531047fe-afd0-41cb-b153-52f8b6843e52.png?w=960&fit=max)\n\nThere's sort of a thing in programming where they say it's not a real program, that's just a script. As if they're somehow a different type of thing.\n\nBut they're not really. I mean JavaScript is actually the thing that runs most of the internet at this point and that's got the word script right there in it. And a lot of the back-end stuff was made with these things where originally they came out and everyone was like that's just a toy that's not serious.\n\nWe're blurring these levels even more now. I can ask warp to do this small task that I would have had to think about and it just does it. I can refactor an entire code base by just asking Claude Code to do one thing. I can build out functionality. I can use it to help me think about how to code. All of these things reduce down the mechanics of making code work to just the understanding, just a clear idea and vision of how things work, but without a whole lot of the boilerplate stuff.\n\nSo then what is coding? Is it something where only a few people do it in the future? Or is it a thing that will change and in fact many many more people in the world will be doing effectively it but in a much easier way?\n\nAt the very least, we'll still need to tell the robots what we want.\n\n![household_robot.png](https://assets.buttondown.email/images/e49424f8-163d-4126-8803-d3df15be4a78.png?w=960&fit=max)\n\n## Accessibility and value\n\nI think it's interesting to compare what happened with reading and writing. I asked Claude to tell me about the relative number of people that could read how valuable it was economically for the individual person to read and how valuable it was for everybody if everyone knew how to read.\n\n[![literacy_graph.png](https://assets.buttondown.email/images/0b58c480-5ec1-4ac7-b142-cb7b1b44816e.png?w=960&fit=max)](https://thefocus.ai/use-cases/coding/literacy/)\n\n*This interactive craft was generated by Claude with zero editing for me, prompts are at the bottom of that page.* \n\nYou can see that when very few people knew how to read and write, knowing how to read and write was a valuable job. You could be a scribe. Your whole living could just be reading and writing letters for people because the marks on the page, no one knew how to do it.\n\nThen it went to, let's just say, publishing pamphlets where all of a sudden more people knew how to write, more people knew how to read, and there became this sort of larger discourse around it. And you were able to move the society and presumably the economy with this new technology, but that involved more people reading, more people writing, and better tools.\n\n![scribe.png](https://assets.buttondown.email/images/5856ebb6-e89a-4954-ab95-640a21ac7246.png?w=960&fit=max)\n\nWe'll just skip right ahead and imagine what it's like now. There's probably more words produced by more people individually than has ever happened in history. Text messages means that everyone needs to know how to read and write. They actually need it. 20 years ago, I'm not sure you needed to know. You could just communicate with people normally. Now, it's a necessity.\n\nBut it's also tremendously easier. 
You're not worrying about inkwells, you're not worrying about spelling, you're not worried about multiple drafts. It's just casual, easy, and just thrown out there and enabling.\n\n## What about knowledge workers\n\nWhat just happened here. This was my idea. I thought about it. I asked Claude to go see if my idea made any sense. Does it? I don't know. But it wasn't necessarily Claude coming up with the idea, but it was Claude helping me think it through.\n\n I think it did a much better job of fact checking and researching than I would have ever done. I would have just thought about it, rambled off some nonsense, and moved along. This at least made a pretense of going through and looking at historical fact. And I could ask it, and it would just give me the data, and then I can refine my thought. So presumably, I got to a better answer faster.\n\n{{subscribe_form}}", "secondary_id": 5, "email_type": "public", "slug": "agentic-yolo-with-warp-cursor-and-claude", "status": "sent", "metadata": {}, "commenting_mode": "enabled", "absolute_url": "https://newsletter.thefocus.ai/archive/agentic-yolo-with-warp-cursor-and-claude/", "filters": { "filters": [], "groups": [], "predicate": "and" }, "analytics": { "recipients": 17, "deliveries": 17, "opens": 14, "clicks": 3, "temporary_failures": 0, "permanent_failures": 0, "unsubscriptions": 0, "complaints": 0, "survey_responses": 0, "webmentions": 0, "page_views__lifetime": 0, "page_views__30": 0, "page_views__7": 0, "subscriptions": 0, "paid_subscriptions": 0, "replies": 0, "comments": 0, "social_mentions": 0 }, "template": null, "related_email_ids": [], "is_comments_disabled": false }, { "id": "0b705ed8-f252-4463-9e62-d3f0bf71a34f", "creation_date": "2025-03-16T16:52:55.894886Z", "modification_date": "2025-03-17T13:05:09.575783Z", "publish_date": "2025-03-17T13:05:00Z", "attachments": [], "subject": "Exposing services with ModelContextProtocol", "canonical_url": "", "image": "https://image.mux.com/bU01hI89Fr8gujTDzaCeNPLYzEH8Xv01lp0018g019u6BVs/animated.gif?width=320", "description": "", "source": "app", "body": "<!-- buttondown-editor-mode: plaintext -->[Read the original](https://thefocus.ai/posts/exposing-services-with-mcp/)\n\nThis morning I plugged in [Tezlab](https://tezlabapp.com) to my local Claude desktop.\n\nThe model context protocol is something like how you would define tools and function calling when you're building your own application. But it lets you do it in a generic way that lets any applications plug into it.\n\nHere's what it looks like:\n\n<a href=\"https://player.mux.com/bU01hI89Fr8gujTDzaCeNPLYzEH8Xv01lp0018g019u6BVs?primary-color=%23ffffff&secondary-color=%23000000&accent-color=%23fa50b5\"><img src=\"https://image.mux.com/bU01hI89Fr8gujTDzaCeNPLYzEH8Xv01lp0018g019u6BVs/animated.gif?width=550\"></a>\n\n\n## How it works\n\n>MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.\n>\n>-- [Introduction to ModelContextProtocol](https://modelcontextprotocol.io/introduction)\n\nThis is very similar to how you would call models with tools. 
You specify in a ZOD schema what the description of your tool is, what parameters it needs, and sort of the situation why you would call it.\n\nFor example, I have a tool that lets you look up Nielsen data, and there's a concept of a nielsen week. When building out that agent, I tell it about a tool like this:\n\n```\nexport const getWeekRange = createTool({\n description:\n \"If the user references a week, first get the nielsen month range, and within that call this tool, to get the week range. Nielsen weeks are based on the sunday being in the month\",\n parameters: z.object({\n date: z\n .string()\n .describe(\n \"The date to get week range for (YYYY-MM-DD format) where DD is th sunday\"\n ),\n }),\n execute: async ({ date }) => {\n const nielson_week = get_nielson_week_by_date(date);\n console.log(\"getWeekRange with date\", date, nielson_week);\n return nielson_week;\n },\n});\n```\n\nSo anytime that you talk about a week now, it goes and calls this function that will figure out exactly when a Nielsen week is. Which is weird because say the first week of June doesn't necessarily begin in June. (It's weird.)\n\nModel context protocol extends this, so you're using a very, very similar way of defining a tool, but instead of it being for your own application, you can expose it to other applications.\n\nIn this case, I'm doing it for Claude Desktop.\n\nI defined two simple tools, like this.\n\n```\nserver.tool('chargeReport', \n. 'Get the latest charge report for your vehicle', \n. {}, \n. async ({}) => {\n const client = new ChargeReport(undefined, false);\n const report = await client.getChargeReport();\n . return {\n . content: [\n . {\n type: 'text',\n text: JSON.stringify(report, null, 2),\n },\n ],\n };\n});\n\nserver.tool(\n 'batteryInfo',\n 'Get information about your vehicle, its location, type, and battery level',\n {},\n async ({}) => {\n const client = new BatteryInfo(undefined, false);\n const info = await client.getVehicleInfo();\n return {\n content: [\n {\n type: 'text',\n text: JSON.stringify(info, null, 2),\n },\n ],\n };\n }\n);\n```\n\nThe description tells the model when and what the tool provides. And for these particular ones I didn't pass any parameters, but above you can see that you can specify any parameters that it requires and the format it needs and the model will do the best to figure it out.\n\n## And it can check the weather\n\n<a href=\"https://player.mux.com/1hTUhfy5l1cEbFtgtmaIKPsKKc5PMeA3kbiwneib92M?primary-color=%23ffffff&secondary-color=%23000000&accent-color=%23fa50b5\"><img src=\"https://image.mux.com/1hTUhfy5l1cEbFtgtmaIKPsKKc5PMeA3kbiwneib92M/animated.gif?width=550\"/></a>\n\nIf you have other tools defined, then the model can combine them any way it wants to. So I have this one tool that provides the location of the car, so that helps the model figure out where you are. But I've got another tool defined -- which is one of the sample ones, I forgot I had it installed -- which goes and checks the weather. \n\nIf you ask it to plan a trip, it thinks, hey, maybe you want to know the weather in these locations. 
It just decided to do all this stuff on its own.\n\n[Code is available](https://github.com/The-Focus-AI/tezlab-cli)\n\n", "secondary_id": 6, "email_type": "public", "slug": "exposing-services-with-modelcontextprotocol", "status": "sent", "metadata": {}, "commenting_mode": "enabled", "absolute_url": "https://newsletter.thefocus.ai/archive/exposing-services-with-modelcontextprotocol/", "filters": { "filters": [], "groups": [], "predicate": "and" }, "analytics": { "recipients": 18, "deliveries": 18, "opens": 14, "clicks": 2, "temporary_failures": 0, "permanent_failures": 0, "unsubscriptions": 0, "complaints": 0, "survey_responses": 0, "webmentions": 0, "page_views__lifetime": 1, "page_views__30": 1, "page_views__7": 1, "subscriptions": 0, "paid_subscriptions": 0, "replies": 0, "comments": 0, "social_mentions": 0 }, "template": null, "related_email_ids": [], "is_comments_disabled": false }, { "id": "d840b1e7-e9f6-4ad9-abda-2ae2fb1dc8f4", "creation_date": "2025-03-19T01:22:49.227337Z", "modification_date": "2025-03-19T13:15:23.370938Z", "publish_date": "2025-03-19T13:15:18Z", "attachments": [], "subject": "The Focus AI Recipies", "canonical_url": "", "image": "https://assets.buttondown.email/images/78ccee35-10b7-434d-98d4-8d22c51d9900.png?w=960&fit=max", "description": "", "source": "app", "body": "<!-- buttondown-editor-mode: plaintext -->🚀 Excited to announce my new Generative AI Practical Guides project!\n\nCheck it out at [thefocus.ai/recipes](https://thefocus.ai/recipes/)\n\n[![recipes_wide.png](https://assets.buttondown.email/images/78ccee35-10b7-434d-98d4-8d22c51d9900.png?w=960&fit=max)](https://thefocus.ai/recipes/)\n\nAfter months of hands-on experimentation, I'm sharing a collection of practical how-tos and techniques to help you leverage AI tools in your daily workflow.\n\nWhat you'll find:\n\n🧠 For developers:\n- Extracting requirements from existing codebases\n- Understanding complex repositories quickly with tools like repomix\n- Automated git commit message generation\n- Test-driven development with AI assistance\n\n🔍 For researchers and analysts:\nVoice-based brainstorming techniques\nSummarizing key points from documents and podcasts\nBuilding comprehensive specifications from conversations\nCreating metadata for content organization\n\n🛠️ Technical guides:\n- Setting up Claude Desktop with Fetch and Brave Search\n- Using Superwhisper for voice input\n- Talking to AI models while driving\n- Configuring custom AI tools\n\nWhether you're a developer looking to speed up your workflow, a content creator needing help with organization, or just curious about practical applications of generative AI - these guides provide concrete techniques you can implement today.\n\nMost of these methods cost pennies to run but save hours of time. 
For example, one comprehensive repository analysis and specification generation cost just $0.51 but saved days of development work.\n", "secondary_id": 7, "email_type": "public", "slug": "the-focus-ai-recipies", "status": "sent", "metadata": {}, "commenting_mode": "enabled", "absolute_url": "https://newsletter.thefocus.ai/archive/the-focus-ai-recipies/", "filters": { "filters": [], "groups": [], "predicate": "and" }, "analytics": { "recipients": 18, "deliveries": 18, "opens": 13, "clicks": 1, "temporary_failures": 0, "permanent_failures": 0, "unsubscriptions": 0, "complaints": 0, "survey_responses": 0, "webmentions": 0, "page_views__lifetime": 1, "page_views__30": 1, "page_views__7": 1, "subscriptions": 0, "paid_subscriptions": 0, "replies": 0, "comments": 0, "social_mentions": 0 }, "template": null, "related_email_ids": [], "is_comments_disabled": false } ], "next": null, "previous": null, "count": 7 }
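```

The listing above shows the data the server works with, but not the server code itself. As a rough illustration, here is a minimal sketch of how an MCP tool that returns such a listing might be defined, following the `server.tool(...)` pattern from the "Exposing services with ModelContextProtocol" post above. The `@modelcontextprotocol/sdk` import paths, the `listEmails` tool name, and the `https://api.buttondown.email/v1/emails` endpoint with its `Token` authorization header are assumptions drawn from the public MCP TypeScript SDK and Buttondown API conventions, not from this server's source.

```typescript
// Hypothetical sketch of a Buttondown MCP tool; names and endpoint are assumed,
// not taken from The-Focus-AI's actual implementation.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({ name: "buttondown", version: "0.1.0" });

// Assumed Buttondown REST API base URL and token-based auth.
const BUTTONDOWN_API = "https://api.buttondown.email/v1";
const API_KEY = process.env.BUTTONDOWN_API_KEY ?? "";

server.tool(
  "listEmails",
  "List the newsletter's emails with subjects, publish dates, and analytics",
  {},
  async () => {
    const res = await fetch(`${BUTTONDOWN_API}/emails`, {
      headers: { Authorization: `Token ${API_KEY}` },
    });
    if (!res.ok) {
      return {
        content: [{ type: "text", text: `Buttondown API error: ${res.status}` }],
        isError: true,
      };
    }
    // The payload has the shape shown above: { results: [...], next, previous, count }.
    const emails = await res.json();
    return {
      content: [{ type: "text", text: JSON.stringify(emails, null, 2) }],
    };
  }
);

// Expose the tool over stdio so a client like Claude Desktop can connect to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```

Once a client such as Claude Desktop registers the server in its MCP configuration, the model can call the tool on its own and combine the results with other tools, in the same way the Tezlab example above combines vehicle data with a weather tool.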