10 interesting stories served every morning and every evening.
...
Read the original on mastodon.social »
Here we go again, the tech press is having another AI doom cycle.
I’ve primarily written this as a response to an NYT analyst painting a completely unsubstantiated, baseless, speculative, outrageous, EGREGIOUS, preposterous “grim picture” of OpenAI going bust.
Mate, come on. OpenAI is not dying, and they’re not running out of money. Yes, they’re creating possibly the craziest circular economy and defying every economics law since Adam Smith published ‘The Wealth of Nations’. $1T in commitments is genuinely insane. But I doubt they’re looking to be acquired; honestly, by who? You don’t raise $40 BILLION at a $260 BILLION VALUATION to get acquired. It’s all for the $1T IPO.
But it seems that the pinnacle of human intelligence: the greatest, smartest, brightest minds have all come together to… build us another ad engine. What happened to superintelligence and AGI?
See, if OpenAI were not a direct threat to the current ad giants, would Google be advertising Gemini every chance they get? Don’t forget they’re also capitalising on their brand-new high-intent ad funnel by launching ads in Gemini and AI Overviews.
March: Closed $40B funding round at $260B valuation, the largest raise by a private tech company on record.
July: First $1B revenue month, doubled from $500M monthly in January.
January 2026: “Both our Weekly Active User (WAU) and Daily Active User (DAU) figures continue to produce all-time-highs (Jan 14 was the highest, Jan 13 was the second highest, etc.)”
January 16, 2026: Announced ads in ChatGPT free and Go tiers.
Yes, OpenAI is burning $8-12B in 2025. Compute infrastructure is obviously not cheap when serving 190M people daily.
So let’s try to model their expected ARPU (average revenue per user, here on an annual basis) by understanding what OpenAI is actually building and how it compares to existing ad platforms.
The ad products they’ve confirmed thus far:
* Ads at bottom of answers when there’s a relevant sponsored product or service based on your current conversation
Testing starts “in the coming weeks” for logged-in adults in the U.S. on free and Go tiers. Ads will be “clearly labeled and separated from the organic answer.” Users can learn why they’re seeing an ad or dismiss it.
* Choice and control: Users can turn off personalization and clear ad data
* Plus, Pro, Business, and Enterprise tiers won’t have ads
They also mentioned a possibility of conversational ads where you can ask follow-up questions about products directly.
Revenue targets: Reports suggest OpenAI is targeting $1B in ad revenue for 2026, scaling to $25B by 2029, though OpenAI hasn’t confirmed these numbers publicly. We can use these as the conservative benchmark, but knowing the sheer product talent at OpenAI, the funding, and the hunger, I think they’ll blow past this.
* Self-serve platform: Advertisers bid for placements, super super super likely, exactly what Google does, probably their biggest revenue stream.
* Affiliate commissions: Built-in checkouts so users can buy products inside ChatGPT, OpenAI takes commission, similar to their Shopify collab.
* Sidebar sponsored content: When users ask about topics with market potential, sponsored info appears in a sidebar marked “Sponsored”
Now let’s compare this to existing ad platforms:
* How it works: Auction-based system where advertisers bid on keywords. Ads appear in search results based on bid + quality score.
* Why it works: High intent (search queries) + owns the entire vertical stack (ad tech, auction system, targeting, decades of optimization)
* Ad revenue: [$212.4B in ad revenue in the first 3 quarters of 2025](https://www.demandsage.com/google-ads-statistics/) (8.4% growth from 2024’s $273.4B)
* Google doesn’t report ARPU so we need to calculate it: ARPU = $296.2B (projected) ÷ 5.01B users = $59.12 per user annually.
* How it works: Auction-based promoted tweets in timeline. Advertisers only pay when users complete actions (click, follow, engage).
* Why it works: Timeline engagement, CPC ~$0.18, but doesn’t own vertical stack and does it on a smaller scale
* Intent level: High. 2.5B daily prompts include product research, recommendations, and comparisons. More intent than Meta’s passive scrolling, comparable to Google search.
* Scale: 1B WAU by Feb 2026, but free users only (~950M at 95% free tier).
So where should ChatGPT’s ARPU sit?
It sits with Search, not Social.
Which puts it between X ($5.54) and Meta ($49.63). OpenAI has better intent than Meta but worse infrastructure. They have more scale than X but no vertical integration. When a user asks ChatGPT “Help me plan a 5-day trip to Kyoto” or “Best CRM for small business,” that is High Intent. That is a Google-level query, not a Facebook-level scroll.
We already have a benchmark for this: Perplexity.
In late 2024/2025, reports confirmed Perplexity was charging CPMs exceeding $50. This is comparable to premium video or high-end search, and miles above the ~$2-6 CPMs seen on social feeds.
If Perplexity can command $50+ CPMs with a smaller user base, OpenAI’s “High Agency” product team will likely floor their pricing there.
* 2026: $5.50 (The “Perplexity Floor”) - Even with a clumsy beta and low fill rate, high-intent queries command premium pricing. If they serve just one ad every 20 queries at a Perplexity-level CPM, they hit this number effortlessly.
* 2027: $18.00 - The launch of a self-serve ad manager (like Meta/Google) allows millions of SMBs to bid. Competition drives price.
* 2028: $30.00 - This is where “Ads” become “Actions.” OpenAI won’t just show an ad for a flight; they will book it. Taking a cut of the transaction (CPA model) yields 10x the revenue of showing a banner.
* 2029: $50.00 (Suuuuuuuper bullish case) - Approaching Google’s ~$60 ARPU. By now, the infrastructure is mature, and “Conversational Commerce” is the standard. This is what SoftBank is praying will happen. (Rough arithmetic on what this ladder implies is sketched just below.)
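To see what that ladder implies in absolute terms, here is a rough back-of-the-envelope sketch in C. The ARPU values are the ones above; the free-user counts are illustrative assumptions only (the post pins down roughly 950M free users today and is sceptical of much bigger numbers), so read the output as directional, not as a forecast:

#include <stdio.h>

int main(void) {
    /* ARPU ladder from the post, in USD per free user per year. */
    double arpu[]  = { 5.50, 18.00, 30.00, 50.00 };
    /* Free-user counts are illustrative assumptions, not the post's figures. */
    double users[] = { 0.95e9, 1.10e9, 1.25e9, 1.40e9 };
    int years[]    = { 2026, 2027, 2028, 2029 };

    /* For reference, the reported (unconfirmed) internal targets are
     * $1B of ad revenue in 2026 scaling to $25B by 2029. */
    for (int i = 0; i < 4; i++) {
        double revenue = arpu[i] * users[i];
        printf("%d: $%.2f ARPU x %.2fB free users = ~$%.1fB in ads\n",
               years[i], arpu[i], users[i] / 1e9, revenue / 1e9);
    }
    return 0;
}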
And we’re forgetting that OpenAI have a serious, serious product team; I don’t doubt for one second that they’ll be fully capable of building out the stack and integrating ads till they occupy your entire subconscious.
In fact they hired Fidji Simo as their “CEO of Applications”, a newly created role that puts her in charge of their entire revenue engine. Fidji is a Meta powerhouse who spent a decade at Facebook working on the Facebook App and… ads:
Leading Monetization of the Facebook App, with a focus on mobile advertising that represents the vast majority of Facebook’s revenue. Launched new ad products such as Video Ads, Lead Ads, Instant Experiences, Carousel ads, etc.
Launched and grew video advertising to be a large portion of Facebook’s revenue.
But 1.5-1.8B free users by 2028? That assumes zero competitive impact from anyone, least of all the looming giant that is Gemini. Unrealistic.
The main revenue growth comes from ARPU scaling not just user growth.
Crunching all the numbers from the “High Intent” model, 2026 looks different.
* 35M paying subscribers: $8.4B minimum (conservatively assuming all at $20/mo Plus tier)
* Definitely higher with Pro ($200/mo) and Enterprise (custom pricing)
* ChatGPT does 2.5B prompts daily, which is what advertisers would class as both higher engagement and higher intent than passive scrolling (although you can fit more ads in a scroll than a chat; a rough tally of what these inputs imply is sketched after this list)
* Reality Check: This assumes they monetise typical search queries at rates Perplexity has already proven possible.
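And a minimal sketch of what those 2026 bullets add up to. The inputs are the post’s own (35M subscribers at the $20/month Plus price, ~950M free users, the $5.50 ad ARPU from the ladder above); the combined total is merely what they imply, ignores Pro and Enterprise upside, and is not a figure the post states:

#include <stdio.h>

int main(void) {
    double subscribers   = 35e6;        /* paying subscribers, all assumed on Plus */
    double plus_per_year = 20.0 * 12;   /* $20/month, annualised */
    double free_users    = 0.95e9;      /* ~95% of ~1B weekly actives on the free tier */
    double ad_arpu       = 5.50;        /* the 2026 "Perplexity Floor" ARPU */

    double subs_rev = subscribers * plus_per_year;  /* ~$8.4B */
    double ad_rev   = free_users * ad_arpu;         /* ~$5.2B */

    printf("Subscriptions: ~$%.1fB\n", subs_rev / 1e9);
    printf("Ads:           ~$%.1fB\n", ad_rev / 1e9);
    printf("2026 total:    ~$%.1fB\n", (subs_rev + ad_rev) / 1e9);
    return 0;
}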
These projections use futuresearch.ai’s base forecast ($39B median for mid-2027, no ads) + advertising overlay from internal OpenAI docs + conservative user growth.
Ads were the key to unlocking profitability, you must’ve seen it coming, thanks to you not skipping that 3 minute health insurance ad - you, yes you helped us achieve AGI!
Mission alignment: Our mission is to ensure AGI benefits all of humanity; our pursuit of advertising is always in support of that mission and making AI more accessible.
The A in AGI stands for Ads! It’s all ads!! Ads that you can’t even block because they are BAKED into the streamed probabilistic word selector purposefully skewed to output the highest bidder’s marketing copy.
Look on the bright side, if they’re turning to ads it likely means AGI is not on the horizon. Your job is safe!
It’s 4:41AM in London, I’m knackered. Idek if I’m gonna post this because I love AI and do agree that some things are a necessary evil to achieve a greater goal (AGI).
Nevertheless, if you have any questions or comments, shout me -> ossamachaib.cs@gmail.com.
...
Read the original on ossa-ma.github.io »
...
Read the original on www.presidentti.fi »
Believe it or not, A$AP Rocky is a huge fan of radiance fields.
Yesterday, when A$AP Rocky released the music video for Helicopter, many viewers focused on the chaos, the motion, and the unmistakable early MTV energy of the piece. What’s easier to miss, unless you know what you’re looking at, is that nearly every human performance in the video was captured volumetrically and rendered as dynamic splats.
I spoke with Evercoast, the team responsible for capturing the performances, as well as Chris Rutledge, the project’s CG Supervisor at Grin Machine, and Wilfred Driscoll of WildCapture and Fitsū.ai, to understand how Helicopter came together and why this project represents one of the most ambitious real world deployments of dynamic gaussian splatting in a major music release to date.
The decision to shoot Helicopter volumetrically wasn’t driven by technology for technology’s sake. According to the team, the director Dan Strait approached the project in July with a clear creative goal to capture human performance in a way that would allow radical freedom in post-production. This would have been either impractical or prohibitively expensive using conventional filming and VFX pipelines.
Chris told me he’d been tracking volumetric performance capture for years, fascinated by emerging techniques that could enable visuals that simply weren’t possible before. Two years ago, he began pitching the idea to directors in his circle, including Dan, as a “someday” workflow. When Dan came back this summer and said he wanted to use volumetric capture for the entire video, the proliferation of gaussian splatting enabled them to take it on.
The aesthetic leans heavily into kinetic motion. Dancers colliding, bodies suspended in midair, chaotic fight scenes, and performers interacting with props that later dissolve into something else entirely. Every punch, slam, pull-up, and fall you see was physically performed and captured in 3D.
Almost every human figure in the video, including Rocky himself, was recorded volumetrically using Evercoast’s system. It’s all real performance, preserved spatially.
This is not the first time that A$AP Rocky has featured a radiance field in one of his music videos. The 2023 music video for Shittin’ Me featured several NeRFs and even the GUI for Instant-NGP, which you can spot throughout the piece.
The primary shoot for Helicopter took place in August in Los Angeles. Evercoast deployed a 56 camera RGB-D array, synchronized across two Dell workstations. Performers were suspended from wires, hanging upside down, doing pull-ups on ceiling-mounted bars, swinging props, and performing stunts, all inside the capture volume.
Scenes that appear surreal in the final video were, in reality, grounded in very physical setups, such as wooden planks standing in for helicopter blades, real wire rigs, and real props. The volumetric data allowed those elements to be removed, recomposed, or entirely recontextualized later without losing the authenticity of the human motion.
Over the course of the shoot, Evercoast recorded more than 10 terabytes of raw data, ultimately rendering roughly 30 minutes of final splatted footage, exported as PLY sequences totaling around one terabyte.
That data was then brought into Houdini, where the post production team used CG Nomads GSOPs for manipulation and sequencing, and OTOY’s OctaneRender for final rendering. Thanks to this combination, the production team was also able to relight the splats.
One of the more powerful aspects of the workflow was Evercoast’s ability to preview volumetric captures at multiple stages. The director could see live spatial feedback on set, generate quick mesh based previews seconds after a take, and later review fully rendered splats through Evercoast’s web player before downloading massive PLY sequences for Houdini.
In practice, this meant creative decisions could be made rapidly and cheaply, without committing to heavy downstream processing until the team knew exactly what they wanted. It’s a workflow that more closely resembles simulation than traditional filming.
Chris also discovered that Octane’s Houdini integration had matured, and that Octane’s early splat support was far enough along to enable relighting. According to the team, the ability to relight splats, introduce shadowing, and achieve a more dimensional “3D video” look was a major reason the final aesthetic lands the way it does.
The team also used Blender heavily for layout and previs, converting splat sequences into lightweight proxy caches for scene planning. Wilfred described how WildCapture’s internal tooling was used selectively to introduce temporal consistency. In his words, the team derived primitive pose estimation skeletons that could be used to transfer motion, support collision setups, and allow Houdini’s simulation toolset to handle rigid body, soft body, and more physically grounded interactions.
One recurring reaction to the video has been confusion. Viewers assume the imagery is AI-generated. According to Evercoast, that couldn’t be further from the truth. Every stunt, every swing, every fall was physically performed and captured in real space. What makes it feel synthetic is the freedom volumetric capture affords. You aren’t limited by the camera’s composition. You have free rein to explore, reposition cameras after the fact, break spatial continuity, and recombine performances in ways that 2D simply can’t.
In other words, radiance field technology isn’t replacing reality. It’s preserving everything.
...
Read the original on radiancefields.com »
A Nobel Peace Prize laureate receives two central symbols of the prize: a gold medal and a diploma. In addition, the prize money is awarded separately. Regardless of what may happen to the medal, the diploma, or the prize money, it is and remains the original laureate who is recorded in history as the recipient of the prize. Even if the medal or diploma later comes into someone else’s possession, this does not alter who was awarded the Nobel Peace Prize.
A laureate cannot share the prize with others, nor transfer it once it has been announced. A Nobel Peace Prize can also never be revoked. The decision is final and applies for all time.
The Norwegian Nobel Committee does not see it as their role to engage in day-to-day commentary on Peace Prize laureates or the political processes that they are engaged in. The prize is awarded on the basis of the laureate’s contributions up to the time the committee’s decision is taken.
The Committee does not comment on laureates’ subsequent statements, decisions, or actions. Any ongoing assessments or choices made by laureates must be understood as their own responsibility.
There are no restrictions in the statutes of the Nobel Foundation on what a laureate may do with the medal, the diploma, or the prize money. This means that a laureate is free to keep, give away, sell, or donate these items.
A number of Nobel medals are displayed in museums around the world. Several Nobel laureates have also chosen to give away or sell their medals:
* Kofi Annan (Peace Prize 2001): In February 2024, his widow, Nane Annan, donated both the medal and the diploma to the United Nations Office in Geneva, where they are now permanently on display. She stated that she wished his legacy to continue inspiring future generations.
* Christian Lous Lange (Peace Prize 1921): The medal of Norway’s first Nobel Peace Prize laureate has been on long-term loan from the Lange family to the Nobel Peace Center in Oslo since 2005. It is now displayed in the Medal Chamber and is the only original Peace Prize medal permanently exhibited to the public in Norway.
* Dmitry Muratov (Peace Prize 2021): The Russian journalist sold his medal for USD 103.5 million in June 2022. The entire sum was donated to UNICEF’s fund for Ukrainian refugee children. This is the highest price ever paid for a Nobel Prize medal.
* David Thouless (Physics Prize 2016): His family donated the medal to Trinity Hall, University of Cambridge, where it is displayed to inspire students.
* James Watson (Medicine Prize 1962): In 2014, his medal was sold for USD 4.76 million. The controversial DNA researcher stated that parts of the proceeds would be used for research purposes. The medal was purchased by Russian billionaire Alisher Usmanov, who later returned it to Watson.
* Leon Lederman (Physics Prize 1988): He sold his medal in 2015 for USD 765,002 to cover medical expenses related to dementia.
* Knut Hamsun (Literature Prize 1920): In 1943, the Norwegian author Knut Hamsun travelled to Germany and met with Propaganda Minister Joseph Goebbels. After returning to Norway, he sent his Nobel medal to Goebbels as a gesture of thanks for the meeting. Goebbels was honoured by the gift. The present whereabouts of the medal are unknown.
...
Read the original on www.nobelpeaceprize.org »
You write a document, hit save, and the file is on your computer. It’s yours. You can inspect it, you can send it to a friend, and you can open it with other apps.
Files come from the paradigm of personal computing.
This post, however, isn’t about personal computing. What I want to talk about is social computing—apps like Instagram, Reddit, Tumblr, GitHub, and TikTok.
What do files have to do with social computing?
But first, a shoutout to files.
Files, as originally invented, were not meant to live inside the apps.
Since files represent your creations, they should live somewhere that you control. Apps create and read your files on your behalf, but files don’t belong to the apps.
Files belong to you—the person using those apps.
Apps (and their developers) may not own your files, but they do need to be able to read and write them. To do that reliably, apps need your files to be structured. This is why app developers, as part of creating apps, may invent and evolve file formats.
A file format is like a language. An app might “speak” several formats. A single format can be understood by many apps. Apps and formats are many-to-many. File formats let different apps work together without knowing about each other.
SVG is an open specification. This means that different developers agree on how to read and write SVG. I created this SVG file in Excalidraw, but I could have used Adobe Illustrator or Inkscape instead. Your browser already knew how to display this SVG. It didn’t need to hit any Excalidraw APIs or to ask permissions from Excalidraw to display this SVG. It doesn’t matter which app has created this SVG.
The file format is the API.
Of course, not all file formats are open or documented.
Some file formats are application-specific or even proprietary like .doc. And yet, although .doc was undocumented, it didn’t stop motivated developers from reverse-engineering it and creating more software that reads and writes .doc:
Another win for the files paradigm.
The files paradigm captures a real-world intuition about tools: what we make with a tool does not belong to the tool. A manuscript doesn’t stay inside the typewriter, a photo doesn’t stay inside the camera, and a song doesn’t stay in the microphone.
Our memories, our thoughts, our designs should outlive the software we used to create them. An app-agnostic storage (the filesystem) enforces this separation.
You may create a file in one app, but someone else can read it using another app. You may switch the apps you use, or use them together. You may convert a file from one format to another. As long as two apps correctly “speak” the same file format, they can work in tandem even if their developers hate each other’s guts.
And if the app sucks?
Someone could always create “the next app” for the files you already have:
Apps may come and go, but files stay—at least, as long as our apps think in files.
See also: File over app
When you think of social apps—Instagram, Reddit, Tumblr, GitHub, TikTok—you probably don’t think about files. Files are for personal computing only, right?
But what if they behaved as files—at least, in all the important ways? Suppose you had a folder that contained all of the things ever POSTed by your online persona:
It would include everything you’ve created across different social apps—your posts, likes, scrobbles, recipes, etc. Maybe we can call it your “everything folder”.
Of course, closed apps like Instagram aren’t built this way. But imagine they were. In that world, a “Tumblr post” or an “Instagram follow” are social file formats:
* You posting on Tumblr would create a “Tumblr post” file in your folder.
* You following on Instagram would put an “Instagram follow” file into your folder.
* You upvoting on Hacker News would add an “HN upvote” file to your folder.
Note this folder is not some kind of an archive. It’s where your data actually lives:
Files are the source of truth—the apps would reflect whatever’s in your folder.
Any writes to your folder would be synced to the interested apps. For example, deleting an “Instagram follow” file would work just as well as unfollowing through the app. Crossposting to three Tumblr communities could be done by creating three “Tumblr post” files. Under the hood, each app manages files in your folder.
In this paradigm, apps are reactive to files. Every app’s database mostly becomes derived data—an app-specific cached materialized view of everybody’s folders.
This might sound very hypothetical, but it’s not. What I’ve described so far is the premise behind the AT protocol. It works in production at scale. Bluesky, Leaflet, Tangled, Semble, and Wisp are some of the new open social apps built this way.
It doesn’t feel different to use those apps. But by lifting user data out of the apps, we force the same separation as we’ve had in personal computing: apps don’t trap what you make with them. Someone can always make a new app for old data:
Like before, app developers evolve their file formats. However, they can’t gatekeep who reads and writes files in those formats. Which apps to use is up to you.
Together, everyone’s folders form something like a distributed social filesystem:
I’ve previously written about the AT protocol in Open Social, looking at its model from a web-centric perspective. But I think that looking at it from the filesystem perspective is just as intriguing, so I invite you to take a tour of how it works.
What does a social filesystem start with?
How would you represent it as a file?
It’s natural to consider JSON as a format. After all, that’s what you’d return if you were building an API. So let’s fully describe this post as a piece of JSON:
However, if we want to store this post as a file, it doesn’t make sense to embed the author information there. After all, if the author later changes their display name or avatar, we wouldn’t want to go through their every post and change them there.
So let’s assume their avatar and name live somewhere else—perhaps, in another file. We could leave author: ‘dril’ in the JSON but this is unnecessary too. Since this file lives inside the creator’s folder—it’s their post, after all—we can always figure out the author based on whose folder we’re currently looking at.
This seems like a good way to describe this post:
But wait, no, this is still wrong.
You see, replyCount, repostCount, and likeCount are not really something that the post’s author has created. These values are derived from the data created by other people—their replies, their reposts, their likes. The app that displays this post will have to keep track of those somehow, but they aren’t this user’s data.
So really, we’re left with just this:
Notice how it took some trimming to identify which parts of the data actually belong in this file. This is something that you have to be intentional about when creating apps with the AT protocol. My mental model for this is to think about the POST request. When the user created this thing, what data did they send? That’s likely close to what we’ll want to store. That’s the stuff the user has just created.
Our social filesystem will be structured more rigidly than a traditional filesystem. For example, it will only consist of JSON files. To make this more explicit, we’ll start introducing our new terminology. We’ll call this kind of file a record.
Now we need to give our record a name. There are no natural names for posts. Could we use sequential numbers? Our names need only be unique within a folder:
One downside is that we’d have to keep track of the latest number, and there’s a risk of collisions when creating many files from different devices at the same time.
Instead, let’s use timestamps with some per-clock randomness mixed in:
This is nicer because these can be generated locally and will almost never collide.
We’ll use these names in URLs so let’s encode them more compactly. We’ll pick our encoding carefully so that sorting alphabetically goes in the chronological order:
Now ls -r gives us a reverse chronological timeline of posts! That’s neat. Also, since we’re sticking with JSON as our lingua franca, we don’t need file extensions.
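As a concrete illustration of this naming scheme, here is a minimal C sketch. It is not the AT protocol’s actual record-key algorithm, just the same idea: pack a microsecond timestamp plus a few random bits into one integer, then encode it at a fixed width using an alphabet whose character order matches its numeric order, so that alphabetical order is chronological order.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

/* Base-32 alphabet listed in ascending ASCII order, so the lexicographic order
 * of encoded strings matches the numeric order of the underlying integers. */
static const char SORTABLE32[] = "234567abcdefghijklmnopqrstuvwxyz";

/* Pack microseconds-since-epoch with 10 random "clock id" bits and encode the
 * result as a fixed-width 13-character key. The fixed width is what makes
 * plain string sorting equivalent to sorting by timestamp. */
static void make_record_key(char out[14]) {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    uint64_t micros = (uint64_t)ts.tv_sec * 1000000ull + (uint64_t)ts.tv_nsec / 1000;
    uint64_t packed = (micros << 10) | ((uint64_t)rand() & 0x3ff);
    for (int i = 12; i >= 0; i--) {
        out[i] = SORTABLE32[packed & 31];
        packed >>= 5;
    }
    out[13] = '\0';
}

int main(void) {
    srand((unsigned)time(NULL));
    char key[14];
    make_record_key(key);
    printf("posts/%s\n", key);  /* later keys sort after earlier ones */
    return 0;
}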
Not all records accumulate over time. For example, you can write many posts, but you only have one copy of profile information—your avatar and display name. For “singleton” records, it makes sense to use a predefined name, like me or self:
By the way, let’s save this profile record to profiles/self:
Note how, taken together, posts/34qye3wows2c5 and profiles/self let us reconstruct more of the UI we started with, although some parts are still missing:
Before we fill them in, though, we need to make our system sturdier.
This was the shape of our post record:
And this was the shape of our profile record:
Since these are stored as files, it’s important for the format not to drift.
TypeScript seems convenient for this but it isn’t sufficient. For example, we can’t express constraints like “the text string should have at most 300 Unicode graphemes”, or “the createdAt string should be formatted as datetime”.
We need a richer way to define social file formats.
We might shop around for existing options (RDF? JSON Schema?) but if nothing quite fits, we might as well design our own schema language explicitly geared towards the needs of our social filesystem. This is what our Post looks like:
We’ll call this the Post lexicon because it’s like a language our app wants to speak.
My first reaction was also “ouch” but it helped to think that conceptually it’s this:
I used to yearn for a better syntax but I’ve actually come around to hesitantly appreciate the JSON. It being trivial to parse makes it super easy to build tooling around it (more on that in the end). And of course, we can make bindings turning these into type definitions and validation code for any programming language.
Our social filesystem looks like this so far:
The posts/ folder has records that satisfy the Post lexicon, and the profiles/ folder contains records (a single record, really) that satisfy the Profile lexicon.
This can be made to work well for a single app. But here’s a problem. What if there’s another app with its own notion of “posts” and “profiles”?
Recall, each user has an “everything folder” with data from every app:
Different apps will likely disagree on what the format of a “post” is! For example, a microblog post might have a 300 character limit, but a proper blog post might not.
Can we get the apps to agree with each other?
We could try to put every app developer in the same room until they all agree on a perfect lexicon for a post. That would be an interesting use of everyone’s time.
For some use cases, like cross-site syndication, a standard-ish jointly governed lexicon makes sense. For other cases, you really want the app to be in charge. It’s actually good that different products can disagree about what a post is! Different products, different vibes. We’d want to support that, not to fight it.
Really, we’ve been asking the wrong question. We don’t need every app developer to agree on what a post is; we just need to let anyone “define” their own post.
We could try namespacing types of records by the app name:
But then, app names can also clash. Luckily, we already have a way to avoid conflicts—domain names. A domain name is unique and implies ownership.
Why don’t we take some inspiration from Java?
This gives us collections.
A collection is a folder with records of a certain lexicon type. Twitter’s lexicon for posts might differ from Tumblr’s, and that’s fine—they’re in separate collections. The collection is always named like a reverse-domain identifier, e.g. com.tumblr.post.
For example, you could imagine these collection names:
You could also imagine these slightly whackier collection names:
* fm.last.scrobble_v2 (breaking changes = new lexicon, just like file formats)
It’s like having a dedicated folder for every file extension.
To see some real lexicon names, check out UFOs and Lexicon Garden.
If you’re an application author, you might be thinking:
Who enforces that the records match their lexicons? If any app can (with the user’s explicit consent) write into any other app’s collection, how do we not end up with a lot of invalid data? What if some other app puts junk into “my” collection?
The answer is that records could be junk, but it still works out anyway.
It helps to draw a parallel to file extensions. Nothing stops someone from renaming cat.jpg to cat.pdf. A PDF reader would just refuse to open it.
Lexicon validation works the same way. The com.tumblr in com.tumblr.post signals who designed the lexicon, but the records themselves could have been created by any app at all. This is why apps always treat records as untrusted input, similar to POST request bodies. When you generate type definitions from a lexicon, you also get a function that will do the validation for you. If some record passes the check, great—you get a typed object. If not, fine, ignore that record.
So, validate on read, just like files.
Some care is required when evolving lexicons. From the moment some lexicon is used in the wild, you should never change which records it would consider valid. For example, you can add new optional fields, but you can’t change whether some field is optional. This ensures that the new code can still read old records and that the old code will be able to read any new records. There’s a linter to check for this. (For breaking changes, make a new lexicon, as you would do with a file format.)
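To make “validate on read” concrete, here is a small hedged sketch in C. The struct and the helper are hypothetical, not part of any AT protocol SDK; the point is only the posture: required fields are checked, unknown fields are ignored, optional fields may be absent, and an invalid record is simply skipped rather than trusted. (The byte-length check is a crude stand-in for the real 300-grapheme limit.)

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* A hypothetical, already-parsed post record. Fields the reader doesn't know
 * about are simply not represented, which is how "ignore the junk" falls out. */
typedef struct {
    const char *text;        /* required */
    const char *created_at;  /* required, expected to be a datetime string */
    const char *lang;        /* optional: newer writers may set it, older ones won't */
} post_record;

/* Validate on read: treat every record as untrusted input. */
static bool post_record_is_valid(const post_record *r) {
    if (r == NULL || r->text == NULL || r->created_at == NULL) return false;  /* required fields */
    if (strlen(r->text) > 3000) return false;  /* crude proxy for "at most 300 graphemes" */
    return true;  /* passes: use it as a typed object; otherwise: skip the record */
}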
Although this is not required, you can publish your lexicons for documentation and distribution. It’s like publishing type definitions. There’s no separate registry for those; you just put them into a com.atproto.lexicon.schema collection of some account, and then prove the lexicon’s domain is owned by you. For example, if I wanted to publish an io.overreacted.comment lexicon, I could place it here:
Then I’d need to do some DNS setup to prove overreacted.io is mine. This would make my lexicon show up in pdsls, Lexicon Garden, and other tools.
We’ve already decided that the profile should live in the com.twitter.profile collection, and the post itself should live in the com.twitter.post collection:
But what about the likes?
Actually, what is a like?
...
Read the original on overreacted.io »
Design is far more than form or function. It’s the tangible expression of a brand’s identity, values, and promise. While a brand defines what a company stands for, design gives those aspirations form and substance. Design uniquely delivers value: visually, physically, and experientially.
At ThinkNext Design, every creation begins with empathy and seeks purpose. We look to understand not just what people need, but what they desire. Whether crafting something entirely new or reimagining the familiar, our work blends aesthetic restraint with purposeful clarity.
The result is innovative design that resonates emotionally, performs beautifully, and endures as a reflection of the brand behind it. More than 200,000,000 ThinkPads have been sold since 1992, and still counting. That didn’t happen by accident.
By the early 1990s, the original IBM AS/400 product line was rapidly losing market share due to a growing perception that the product family employed outdated technology, and was highly overpriced. David led a strategic design initiative to recast that image via a sweeping change that would forever reposition the status quo.
The resulting award-winning design featured stark black enclosures, dramatic air inlets, and simple yet powerful forms. This was a striking contrast to the putty-colored neutral appearance that had come to dominate not only the IBM server products, but the entire industry. Following the series introduction, AS/400 Division revenues jumped by a double-digit percentage. Comments about “yesterday’s technology” were quickly replaced by associations with objects such as the innovative F-117A stealth fighter.
AS/400 systems had a control panel that included special functions that were designed to only be accessed by authorized operators. Restricted access was achieved using a traditional stainless steel keylock mated to a rotating electric switch. Without the key only basic functions could be operated. Unfortunately the assembly was very costly and the metal key/lock was a source of potential electrostatic discharge. The security keystick eliminated the dated and flawed assembly entirely. Inserting the asymmetrical key enabled access to the restricted functions, cost a fraction of the previous solution and eliminated the ESD issue altogether.
The soft rim and soft dome caps were added in 1997 creating a suite of Trackpoint cap options. The introduction followed an exhaustive design-led initiative to improve the existing cat tongue cap’s comfort and utility. The effort revealed that three caps were better than one, giving the user choice. All three were shipped with every ThinkPad for many years. Only the soft dome cap remains in production.
Prior to the introduction of the Netfinity 7000, IBM’s PC servers were tower based offerings that often found themselves awkwardly placed on shelves in generic computer racks. The Netfinity design eliminated this makeshift approach with a “rack and stack” solution. The system could truly rack mount using industry standard rails, or stand alone as a tower. The design also included a stacking NetBay with provision for mounting rack mounted OEM devices without purchasing a full blown rack. Many of the system components, including hardfiles, were removable from the front without tools.
The ThinkPad ThinkLight was first introduced on the ThinkPad i Series 1400. Observing a fellow airline passenger reading using a small light clipped to the top edge of their book, David immediately thought this idea could be adapted for use on a laptop. The final design used a white LED to illuminate the keyboard from the top bezel. It was the industry’s first, and arguably most effective method, of illuminating a laptop keyboard.
The introduction of the IBM Personal Computer in 1981 was a technology milestone that forever changed the world. Subsequent innovation, however, was primarily limited to technology advancements and improved affordability. In nearly 20 years, little had been done to dramatically change the design paradigm of metal box, chunky monitor, and keyboard. David initiated and led a design project to reinvent the standard.
Working in close collaboration with noted designer Richard Sapper, David and his team created an industry-leading all-in-one computer that capitalized on emerging flat-panel display technology. The final, award-winning design integrated the monitor, CPU, and optical drive into a remarkably slim profile. The optical drive was discreetly concealed within the base structure, dropping down smoothly at the touch of a button.
Bucking the trend for bloated, frivolous designs, the Aptiva S Series speakers were conceived to match the unique angular design language of the flat panel based computer design. The sophisticated desktop speakers could be customized with brightly colored fabric grills adding to the premium image. The design was selected by Dieter Rams for a Best of Category award at the annual IF Design Exhibition in Germany.
The ThinkPad X300 stands as a landmark in industrial design, proving how disciplined engineering and purposeful aesthetics can redefine an entire product category. Its carbon-fiber and magnesium construction, meticulously refined form, and forward-looking adoption of SSD storage and LED backlighting positioned it as a breakthrough ultraportable long before such features became commonplace. Its development earned widespread attention, most notably in BusinessWeek’s cover story “The Making of the ThinkPad X300,” which showcased the intense, design-driven effort behind the machine. The project was explored even more deeply in Steve Hamm’s book The Race for Perfect, which chronicled the X300’s creation as an example of ambitious, high-stakes innovation. Together, these accounts cement the X300’s legacy as one of the most influential and thoughtfully crafted ThinkPads ever made.
Skylight was an early “smartbook” product designed as a lightweight, always-connected device that blended elements of a smartphone and laptop. The imaginative overall product design was created by Richard Sapper, but the keyboard was the work of David and his team. Although the product was short-lived, the sculpted island style keyboard was eventually adopted for use on future ThinkPad and consumer laptops. The sculpted key surface and unique D-shape aid substantially in enhancing comfort and improving typing accuracy.
Shortly following the Lenovo acquisition of IBM’s PC business, the IBM logo was removed from ThinkPad. David was a strong proponent of establishing ThinkPad as the primary badge on the product due to the brand’s high recognition and subsequent value. He proposed using the sub-brand font, normally appearing below the IBM logo, as ThinkPad’s new wordmark. He enhanced it with a bright red dot over the letter i which was derived from the TrackPoint cap. His now iconic concept was universally adopted as the new ThinkPad product badge worldwide in 2007.
In 2010 the dot was enhanced with a glowing red LED that is still in use today. The dot glows solid if the ThinkPad is powered on and slowly pulses like a heartbeat when in a suspended sleep state. The design draws attention and adds life to the brand.
The first-generation ThinkPad X1 Carbon introduced a bold new interpretation of classic ThinkPad design. Its carbon-fiber-reinforced chassis delivered exceptional strength with a remarkably low weight. The sculpted island-style keyboard, subtle red accents, and gently tapered edges gave it a modern, precise appearance without sacrificing the brand’s renowned usability and iconic visual impression.
The scaled-down travel mouse shares its essential geometry with a mouse originally created for IBM’s Aptiva lineup in the late 1990s. The characteristically low front, generously sculpted tail, and inwardly inclined side surfaces enhance ergonomics and daily use. These design concepts have been nearly universally adopted by other computer and accessory manufacturers.
When using a tablet as a camera, the screen cover typically flops around, since folding it all the way around would block the camera. The quickshot cover eliminates this inconvenience thanks to a patented folding corner. When folded back, it automatically launches the camera app to let you take a picture instantly. The flopping-cover annoyance is gone.
The revolutionary design replaced the bezel/box paradigm with a form that resembles a rectangular tube through which large volumes of air pass. The unique appearance telegraphs raw power. The design, however, is much more than skin deep. The machine’s innovative interior is highly modular and eliminates the need for tools to replace or upgrade key components. Flush handles are thoughtfully incorporated in the shell for moving the workstation.
The pioneering ThinkPad X1 Tablet design featured a uniquely hinged kickstand that enabled customizing the user experience with a system of snap-on modules. Modules offered were the Productivity Module, which added extra battery life and additional ports; the Presenter Module, featuring a built-in pico projector for critical presentations; and the 3D Imaging Module, equipped with an Intel RealSense camera for depth sensing and 3D scanning. Together, these modules provided flexible, on-demand functionality while preserving the tablet’s portability.
ThinkPad 25 was created and launched to celebrate the 25th anniversary of the iconic brand. It artfully blended retro design elements with modern engineering. Inspired heavily by years of passionate customer feedback and social-media campaigns calling for a “classic ThinkPad” revival, the project brought back beloved features such as the 7-row keyboard with blue accents, a tradition-inspired ThinkPad logo, and TrackPoint cap options. Wrapped in a soft-touch black chassis and powered by contemporary hardware, the ThinkPad 25 stood as a collaborative tribute—shaped not only by Lenovo’s designers but also by a global community of fans.
Originally written and designed for the 20th anniversary celebration held at the MoMA. The highly collectable work was updated in 2025 for the 25th anniversary limited edition ThinkPad T25. Both booklets document and illuminate David Hill’s beliefs and philosophies that have shaped the design of ThinkPad for decades.
The ThinkPad ThinkShutter is a simple, built-in mechanical privacy cover designed to give users instant control over their webcam. Sliding smoothly across the lens, it provides a clear visual indication when the camera is physically blocked, eliminating reliance on questionable software controls or LED indicators. It integrates cleanly into the display bezel, adding negligible thickness. Makeshift peace-of-mind solutions such as masking tape, Post-it notes, and even clothespins are a thing of the past.
...
Read the original on thinknextdesign.com »
This program generates images from text prompts (and optionally from other images) using the FLUX.2-klein-4B model from Black Forest Labs. It can be used as a library as well, and is implemented entirely in C, with zero external dependencies beyond the C standard library. MPS and BLAS acceleration are optional but recommended.
I (the human here, Salvatore) wanted to test code generation with a more ambitious task, over the weekend. This is the result. It is my first open source project where I wrote zero lines of code. I believe that inference systems not using the Python stack (which I do not appreciate) are a way to free open models usage and make AI more accessible. There is already a project doing the inference of diffusion models in C / C++ that supports multiple models, and is based on GGML. I wanted to see if, with the assistance of modern AI, I could reproduce this work in a more concise way, from scratch, in a weekend. Looks like it is possible.
This code base was written with Claude Code, using the Claude Max plan, the small one of ~80 euros per month. I almost reached the limits but this plan was definitely sufficient for such a large task, which was surprising. In order to simplify the usage of this software, no quantization is used, nor do you need to convert the model. It runs directly with the safetensors model as input, using floats.
Even if the code was generated using AI, my help in steering towards the right design, implementation choices, and correctness has been vital during the development. I learned quite a few things about working with non-trivial projects and AI.
# Build (choose your backend)
make mps # Apple Silicon (fastest)
# or: make blas # Intel Mac / Linux with OpenBLAS
# or: make generic # Pure C, no dependencies
# Download the model (~16GB)
pip install huggingface_hub
python download_model.py
# Generate an image
./flux -d flux-klein-model -p "A woman wearing sunglasses" -o output.png
That’s it. No Python runtime, no PyTorch, no CUDA toolkit required at inference time.
Generated with: ./flux -d flux-klein-model -p "A picture of a woman in 1960 America. Sunglasses. ASA 400 film. Black and White." -W 250 -H 250 -o /tmp/woman.png, and later processed with image-to-image generation via ./flux -d flux-klein-model -i /tmp/woman.png -o /tmp/woman2.png -p "oil painting of woman with sunglasses" -v -H 256 -W 256
* Zero dependencies: Pure C implementation, works standalone. BLAS optional for ~30x speedup (Apple Accelerate on macOS, OpenBLAS on Linux)
./flux -d flux-klein-model -p "A fluffy orange cat sitting on a windowsill" -o cat.png
./flux -d flux-klein-model -p "oil painting style" -i photo.png -o painting.png -t 0.7
The -t (strength) parameter controls how much the image changes:
The seed is always printed to stderr, even when random:
To reproduce the same image, use the printed seed:
make # Show available backends
make generic # Pure C, no dependencies (slow)
make blas # BLAS acceleration (~30x faster)
make mps # Apple Silicon Metal GPU (fastest, macOS only)
For make blas on Linux, install OpenBLAS first:
# Ubuntu/Debian
sudo apt install libopenblas-dev
# Fedora
sudo dnf install openblas-devel
make clean # Clean build artifacts
make info # Show available backends for this platform
make test # Run reference image test
The model weights are downloaded from HuggingFace:
pip install huggingface_hub
python download_model.py
Inference steps: This is a distilled model that produces good results with exactly 4 sampling steps.
The text encoder is automatically released after encoding, reducing peak memory during diffusion. If you generate multiple images with different prompts, the encoder reloads automatically.
* The C implementation uses float32 throughout, while PyTorch uses bfloat16 with highly optimized MPS kernels. The next step of this project is likely to implement such an optimization, in order to reach similar speed, or at least try to approach it.
* The generic (pure C) backend is extremely slow and only practical for testing at small sizes.
Dimensions should be multiples of 16 (the VAE downsampling factor).
The library can be integrated into your own C/C++ projects. Link against libflux.a and include flux.h.
Here’s a complete program that generates an image from a text prompt:
#include "flux.h"
#include <stdio.h>
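/* What follows is an illustrative sketch rather than the repository's exact
 * listing: it simply chains the calls documented in the API reference below
 * (flux_load_dir, flux_generate, flux_image_save, flux_get_error, flux_free). */
int main(void) {
    flux_ctx *ctx = flux_load_dir("flux-klein-model");
    if (!ctx) {
        fprintf(stderr, "Error: %s\n", flux_get_error());
        return 1;
    }

    flux_params params = FLUX_PARAMS_DEFAULT;  /* 256x256, 4 steps, CFG 1.0, random seed */
    flux_image *img = flux_generate(ctx, "A fluffy orange cat sitting on a windowsill", &params);
    if (!img) {
        fprintf(stderr, "Error: %s\n", flux_get_error());
        flux_free(ctx);
        return 1;
    }

    if (flux_image_save(img, "cat.png") != 0)
        fprintf(stderr, "Error: %s\n", flux_get_error());

    flux_image_free(img);
    flux_free(ctx);
    return 0;
}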
gcc -o myapp myapp.c -L. -lflux -lm -framework Accelerate # macOS
gcc -o myapp myapp.c -L. -lflux -lm -lopenblas # Linux
Transform an existing image guided by a text prompt. The strength parameter controls how much the image changes:
#include "flux.h"
#include <stdio.h>
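/* Again a sketch built only from the documented API, not the repo's exact
 * listing: flux_image_load() reads the input image, params.strength controls
 * how far the result drifts from it, and flux_img2img() does the work. */
int main(void) {
    flux_ctx *ctx = flux_load_dir("flux-klein-model");
    flux_image *input = flux_image_load("photo.png");  /* PNG or PPM */
    if (!ctx || !input) {
        fprintf(stderr, "Error: %s\n", flux_get_error());
        return 1;
    }

    flux_params params = FLUX_PARAMS_DEFAULT;
    params.strength = 0.7f;  /* same value as the -t 0.7 CLI example above */

    flux_image *out = flux_img2img(ctx, "oil painting style", input, &params);
    if (!out) {
        fprintf(stderr, "Error: %s\n", flux_get_error());
    } else {
        flux_image_save(out, "painting.png");
        flux_image_free(out);
    }

    flux_image_free(input);
    flux_free(ctx);
    return 0;
}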
* 0.9 - Almost complete regeneration, keeps only composition
When generating multiple images with different seeds but the same prompt, you can avoid reloading the text encoder:
flux_ctx *ctx = flux_load_dir("flux-klein-model");
flux_params params = FLUX_PARAMS_DEFAULT;
params.width = 256;
params.height = 256;

/* Generate 5 variations with different seeds */
for (int i = 0; i < 5; i++) {
    flux_set_seed(1000 + i);
    flux_image *img = flux_generate(ctx, "A mountain landscape at sunset", &params);
    char filename[64];
    snprintf(filename, sizeof(filename), "landscape_%d.png", i);
    flux_image_save(img, filename);
    flux_image_free(img);
}

flux_free(ctx);
Note: The text encoder (~8GB) is automatically released after the first generation to save memory. It reloads automatically if you use a different prompt.
All functions that can fail return NULL on error. Use flux_get_error() to get a description:
flux_ctx *ctx = flux_load_dir("nonexistent-model");
if (!ctx) {
    fprintf(stderr, "Error: %s\n", flux_get_error());
    /* Prints something like: "Failed to load VAE - cannot generate images" */
    return 1;
}
flux_ctx *flux_load_dir(const char *model_dir); /* Load model, returns NULL on error */
void flux_free(flux_ctx *ctx); /* Free all resources */
flux_image *flux_generate(flux_ctx *ctx, const char *prompt, const flux_params *params);
flux_image *flux_img2img(flux_ctx *ctx, const char *prompt, const flux_image *input,
const flux_params *params);
flux_image *flux_image_load(const char *path); /* Load PNG or PPM */
int flux_image_save(const flux_image *img, const char *path); /* 0=success, -1=error */
flux_image *flux_image_resize(const flux_image *img, int new_w, int new_h);
void flux_image_free(flux_image *img);
void flux_set_seed(int64_t seed); /* Set RNG seed for reproducibility */
const char *flux_get_error(void); /* Get last error message */
void flux_release_text_encoder(flux_ctx *ctx); /* Manually free ~8GB (optional) */
typedef struct {
int width; /* Output width in pixels (default: 256) */
int height; /* Output height in pixels (default: 256) */
int num_steps; /* Denoising steps, use 4 for klein (default: 4) */
float guidance_scale; /* CFG scale, use 1.0 for klein (default: 1.0) */
int64_t seed; /* Random seed, -1 for random (default: -1) */
float strength; /* img2img only: 0.0-1.0 (default: 0.75) */
} flux_params;
/* Initialize with sensible defaults */
#define FLUX_PARAMS_DEFAULT { 256, 256, 4, 1.0f, -1, 0.75f }
...
Read the original on github.com »
...
Read the original on www.bundesregierung.de »
...
Read the original on gitlab.winehq.org »