10 interesting stories served every morning and every evening.
From the beginning, our goal has been to build tools that radically change what it feels like to work with Python — tools that feel fast, robust, intuitive, and integrated.
Today, we’re taking a step forward in that mission by announcing that we’ve entered into an agreement to join OpenAI as part of the Codex
team.
Over the past few years, our tools have grown from zero to hundreds of millions of downloads per month across Ruff, uv, and
ty. The Astral toolchain has become foundational to modern Python development. The numbers — and the impact — went far beyond my most ambitious expectations at every step of the way.
Open source is at the heart of that impact and the heart of that story; it sits at the center of everything we do. In line with our philosophy and
OpenAI’s own announcement, OpenAI will continue supporting our open source tools after the deal closes. We’ll keep building in the open, alongside our community — and for the broader Python ecosystem — just as we have from the start.
I view building tools as an incredibly high-leverage endeavor. As I wrote in our
launch post three years ago: “If you could make the Python ecosystem even 1% more productive, imagine how that impact would compound?”
Today, AI is rapidly changing the way we build software, and the pace of that change is only accelerating. If our goal is to make programming more productive, then building at the frontier of AI and software feels like the highest-leverage thing we can do.
It is increasingly clear to me that Codex is that frontier. And by bringing Astral’s tooling and expertise to OpenAI, we’re putting ourselves in a position to push it forward. After joining the Codex team, we’ll continue building our open source tools, explore ways they can work more seamlessly with Codex, and expand our reach to think more broadly about the future of software development.
Through it all, though, our goal remains the same: to make programming more productive. To build tools that radically change what it feels like to build software.
On a personal note, I want to say thank you, first, to the Astral team, who have always put our users first and shipped some of the most beloved software in the world. You’ve pushed me to be a better leader and a better programmer. I am so excited to keep building with you.
Second, to our investors, especially
Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B. As a first-time, technical, solo founder, you showed far more belief in me than I ever showed in myself, and I will never forget that.
And third, to our users. Our tools exist because of you. Thank you for your trust. We won’t let you down.
...
Read the original on astral.sh »
The verdict was the icing on the cake.
Afroman did not defame Ohio cops in a satirical music video that featured footage of them fruitlessly raiding the rapper’s house, a jury found on Wednesday.
The 51-year-old “Because I Got High” rapper, whose real name is Joseph Foreman, held up his hands in triumph and hugged people in the courtroom after he was found not liable for defamation or false-light invasion of privacy.
Foreman was sued by the Adams County Sheriff’s Office over a drug search at his home in August 2022 that resulted in no criminal charges.
The hip hop star wrote the satirical song “Lemon Pound Cake” and made a music video with real footage of the raid taken from his home surveillance cameras to raise money for property damage caused during the search, he has said.
Seven cops with the sheriff’s office then sued him in March 2023, alleging the music video defamed them, invaded their constitutional privacy, and was an intentional infliction of emotional distress.
The video features footage of the cops busting down his door during the raid, and of one officer eyeing his “mama’s lemon poundcake” with his gun drawn.
After making the music video, Foreman allegedly continued putting up social media posts with names of the officers involved, the lawsuit states.
Several of the posts allegedly falsely claimed that the cops “stole my money” and were “criminals disguised as law enforcement,” according to the suit.
They also falsely stated that the officers are “white supremacists,” that Officer Brian Newman “used to do hard drugs” before “snitching” on his friends, and that Officer Lisa Phillips is “biologically male,” according to the lawsuit.
Foreman’s lawyer had argued the song, which he described as a combination of comedy and music, was simply free speech.
“We see public officials all the time that are made fun of,” lawyer David Osborne said in a closing statement Wednesday. “They are going to be held to higher standards, their work is going to be criticized, that’s just what happens when you’re a public official.”
“It’s a social commentary on the fact that they didn’t do things correctly,” he said of the officers.
An attorney for the police, meanwhile, demanded a total of $3.9 million in damages — divided among the seven officers involved.
“[Foreman] perpetuated lies intentionally repeatedly over 3 1/2 years on the internet about these seven brave deputy sheriffs,” lawyer Robert Klingler said in closing remarks Wednesday. “[He] knew that what he posted on the internet were lies.”
“He says he’s not going to stop…tell him through your verdict that he needs to stop,” Klingler added.
“All of this is their fault,” Foreman testified in court Tuesday, according to WCPO.
“If they hadn’t wrongly raided my house, there would be no lawsuit, I would not know their names, they wouldn’t be on my home surveillance system, and there would be no songs … my money would still be intact.”
...
Read the original on nypost.com »
This post is essentially
this comic strip
expanded into a full-length post:
For a long time I didn’t need a post like the one I’m about to write. If someone brought up the idea of generating code from specifications I’d share the above image with them and that would usually do the trick.
However, agentic coding advocates claim to have found a way to defy gravity and generate code purely from specification documents. Moreover, they’ve also muddied the waters enough that I believe the above comic strip warrants additional commentary for why their claims are misleading.
In my experience their advocacy is rooted in two common misconceptions:
Misconception 1: specification documents are simpler than the corresponding code
They lean on this misconception when marketing agentic coding to believers who think of agentic coding as the next generation of outsourcing. They dream of engineers being turned into managers who author specification documents which they farm out to a team of agents to do the work, which only works if it’s cheaper to specify the work than to do the work.
Misconception 2: specification work must be more thoughtful than coding work
They lean on this misconception when marketing agentic coding to skeptics concerned that agentic coding will produce unmaintainable slop. The argument is that filtering the work through a specification document will improve quality and promote better engineering practices.
I’ll break down why I believe those are misconceptions using a concrete example.
I’ll begin from OpenAI’s Symphony
project, which OpenAI heralds as an example of how to generate a project from a specification document.
The Symphony project is an agent orchestrator that claims to be generated from a “specification” (SPEC.md), and I say “specification” in quotes because this file is less of a specification and more like pseudocode in markdown form. If you scratch the surface of the document you’ll find it contains things like prose dumps of the database schema:
turn_count (integer)
Number of coding-agent turns started within the current worker lifetime.
The runtime counts issues by their current tracked state in the running map.
Cancel any existing retry timer for the same issue.
Normal continuation retries after a clean worker exit use a short fixed delay of 1000 ms.
Power is capped by the configured max retry backoff (default 300000 / 5m).
If found and still candidate-eligible:
Dispatch if slots are available.
Otherwise requeue with error no available orchestrator slots.
If found but no longer active, release claim.
… or sections explicitly added to babysit the model’s code generation, like this:
This section is intentionally redundant so a coding agent can implement the config layer quickly.
function start_service():
configure_logging()
start_observability_outputs()
start_workflow_watch(on_change=reload_and_reapply_workflow)
state = {
poll_interval_ms: get_config_poll_interval_ms(),
max_concurrent_agents: get_config_max_concurrent_agents(),
running: {},
claimed: set(),
retry_attempts: {},
completed: set(),
codex_totals: {input_tokens: 0, output_tokens: 0, total_tokens: 0, seconds_running: 0},
codex_rate_limits: null
}
validation = validate_dispatch_config()
if validation is not ok:
log_validation_error(validation)
fail_startup(validation)
startup_terminal_workspace_cleanup()
schedule_tick(delay_ms=0)
event_loop(state)
I feel like it’s pretty disingenuous for agentic coding advocates to market this as a substitute for code when the specification document reads like code (or in some cases is literally code).
Don’t get me wrong: I’m not saying that specification documents should never include pseudocode or a reference implementation; those are both fairly common in specification work. However, you can’t claim that specification documents are a substitute for code when they read like code.
I bring this up because I believe Symphony illustrates the first misconception well:
Misconception 1: specification documents are simpler than the corresponding code
If you try to make a specification document precise enough to reliably generate a working implementation you must necessarily contort the document into code
or something strongly resembling code (like highly structured and formal English).
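A toy illustration of my own (the prose rule paraphrases the kind of retry language quoted above, and the function name is invented, not taken from the Symphony spec):

```python
# Take a single retry-delay rule, stated in "precise" English:
#
#   "Retries after a clean worker exit use a fixed delay of 1000 ms;
#    otherwise the backoff doubles per attempt, capped at 300000 ms."
#
# Written precisely enough to implement reliably, the "spec" already
# is the code, line for line:

def retry_delay_ms(attempt: int, clean_exit: bool,
                   max_backoff_ms: int = 300_000) -> int:
    if clean_exit:
        return 1_000                      # "fixed delay of 1000 ms"
    delay_ms = 1_000 * 2 ** attempt       # "backoff doubles per attempt"
    return min(delay_ms, max_backoff_ms)  # "capped at 300000 ms"
```

Every clause of the English maps one-to-one onto a line of the implementation; removing any clause makes the generated code unreliable, and keeping them all makes the document code in disguise.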
Dijkstra explains why this is inevitable:
We know in the meantime that the choice of an interface is not just a division of (a fixed amount of) labour, because the work involved in co-operating and communicating across the interface has to be added. We know in the meantime —from sobering experience, I may add— that a change of interface can easily increase at both sides of the fence the amount of work to be done (even drastically so). Hence the increased preference for what are now called “narrow interfaces”. Therefore, although changing to communication between machine and man conducted in the latter’s native tongue would greatly increase the machine’s burden, we have to challenge the assumption that this would simplify man’s life.
A short look at the history of mathematics shows how justified this challenge is. Greek mathematics got stuck because it remained a verbal, pictorial activity, Moslem “algebra”, after a timid attempt at symbolism, died when it returned to the rhetoric style, and the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole.
Agentic coders are learning the hard way that you can’t escape the “narrow interfaces” (read: code) that engineering labor requires; you can only transmute that labor into something superficially different which still demands the same precision.
Also, generating code from specifications doesn’t even reliably work! I actually tried to do what the Symphony
README
suggested:
Tell your favorite coding agent to build Symphony in a programming language of your choice:
Implement Symphony according to the following spec:
https://github.com/openai/symphony/blob/main/SPEC.md
I asked Claude Code to build Symphony in a programming language of my choice (Haskell, if you couldn’t guess from the name of my blog) and it did not work. You can find the result in my
Gabriella439/symphony-haskell repository.
Not only were there multiple bugs (which I had to prompt Claude to fix and you can find those fixes in the commit history), but even when things “worked” (meaning: no error messages) the codex agent just spun silently without making any progress on the following sample Linear ticket:
No need to create a GitHub project. Just create a blank git repository
In other words, Symphony’s “vain attempt at verbal precision” (to use Dijkstra’s words) still fails to reliably generate a working implementation.
This problem also isn’t limited to Symphony: we see this same problem even for well-known specifications like YAML. The
YAML specification is extremely detailed, widely used, and includes a
conformance test suite, and yet the vast majority of YAML implementations still do not conform fully to the spec.
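A concrete sketch of why conformance is so hard (the functions are my own simplification, not real parser code): YAML 1.1 resolves spellings like `yes`/`no`/`on`/`off` to booleans, while the YAML 1.2 core schema resolves only `true`/`false`, so two parsers can each follow "the spec" and still disagree about the same scalar.

```python
# YAML 1.1 resolves many plain-scalar spellings to booleans:
_TRUE_11 = {"y", "Y", "yes", "Yes", "YES", "true", "True", "TRUE",
            "on", "On", "ON"}
_FALSE_11 = {"n", "N", "no", "No", "NO", "false", "False", "FALSE",
             "off", "Off", "OFF"}

def resolve_scalar_11(s: str):
    """Plain-scalar boolean resolution per YAML 1.1 (simplified)."""
    if s in _TRUE_11:
        return True
    if s in _FALSE_11:
        return False
    return s  # everything else stays a string

def resolve_scalar_12(s: str):
    """Plain-scalar boolean resolution per the YAML 1.2 core schema."""
    if s in {"true", "True", "TRUE"}:
        return True
    if s in {"false", "False", "FALSE"}:
        return False
    return s

# The same document means two different things under the two versions:
assert resolve_scalar_11("no") is False   # YAML 1.1: "no" is a boolean
assert resolve_scalar_12("no") == "no"    # YAML 1.2: "no" is a string
```

If even a specification this mature, with a conformance suite, can't get independent implementations to agree, a markdown prose dump has no chance.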
Symphony could try to fix the flakiness by expanding the specification but it’s already pretty long, clocking in at 1/6 the size of the included Elixir
implementation! If the specification were to grow any further they would recapitulate Borges’s “On Exactitude in Science” short story:
…In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.
Specification work is supposed to be harder than coding. Typically the reason we write specification documents before doing the work is to encourage viewing the project through a contemplative and critical lens, because once coding begins we switch gears and become driven by a bias toward action.
So then why do I say that this is a misconception:
Misconception 2: specification work must be more thoughtful than coding work
The problem is that this sort of thoughtfulness is no longer something we can take for granted, thanks to the industry push to reduce and devalue labor at tech companies. When you begin from the premise that specification work should be easier than coding, you set yourself up to fail. There is no way to do the difficult and uncomfortable work that specification writing requires if you optimize for delivery speed. That’s how you get something like the Symphony “specification,” which looks superficially like a specification document but falls apart under closer scrutiny.
In fact, the Symphony specification reads as AI-written slop.
Section 10.5
is a particularly egregious example of the slop I’m talking about, such as this excerpt:
Purpose: execute a raw GraphQL query or mutation against Linear using Symphony’s configured tracker auth for the current session.
Availability: only meaningful when tracker.kind == “linear” and valid Linear auth is configured.
query must contain exactly one GraphQL operation.
variables is optional and, when present, must be a JSON object.
If the provided document contains multiple operations, reject the tool call as invalid input.
operationName selection is intentionally out of scope for this extension.
Reuse the configured Linear endpoint and auth from the active Symphony workflow/runtime config; do not require the coding agent to read raw tokens from disk.
invalid input, missing auth, or transport failure -> success=false with an error payload
Return the GraphQL response or error payload as structured tool output that the model can inspect in-session.
That is a grab bag of “specification-shaped” sentences that reads like an agent’s work product: lacking coherence, purpose, or understanding of the bigger picture.
A specification document like this must necessarily be slop, even if it were authored by a human, because its authors are optimizing for delivery time rather than coherence or clarity. In the current engineering climate we can no longer take for granted that specifications are the product of careful thought and deliberation.
Specifications were never meant to be time-saving devices. If you are optimizing for delivery time then you are likely better off authoring the code directly rather than going through an intermediate specification document.
More generally, the principle of “garbage in, garbage out” applies here. There is no world where you input a document lacking clarity and detail and get a coding agent to reliably fill in that missing clarity and detail. Coding agents are not mind readers and even if they were there isn’t much they can do if your own thoughts are confused.
Copyright © 2026 Gabriella Gonzalez. This work is licensed under CC BY-SA 4.0
...
Read the original on haskellforall.com »
The “advanced flow” will be available before verification enforcement begins later this year.
Google is planning big changes for Android in 2026 aimed at combating malware across the entire device ecosystem. Starting in September, Google will begin restricting application sideloading with its developer verification program, but not everyone is on board. Android Ecosystem President Sameer Samat tells Ars that the company has been listening to feedback, and the result is the newly unveiled advanced flow, which will allow power users to skip app verification.
With its new limits on sideloading, Android phones will only install apps that come from verified developers. To verify, devs releasing apps outside of Google Play will have to provide identification, upload a copy of their signing keys, and pay a $25 fee. It all seems rather onerous for people who just want to make apps without Google’s intervention.
Apps that come from unverified developers won’t be installable on Android phones—unless you use the new advanced flow, which will be buried in the developer settings.
When sideloading apps today, Android phones alert the user to the “unknown sources” toggle in the settings, and there’s a flow to help you turn it on. The verification bypass is different and will not be revealed to users. You have to know where this is and proactively turn it on yourself, and it’s not a quick process. Here are the steps:
Enable developer options by tapping the software build number in About Phone seven times
In Settings > System, open Developer Options and scroll down to “Allow Unverified Packages”
Flip the toggle and tap to confirm you are not being coerced
Return to the unverified packages menu at the end of the security delay
Scroll past additional warnings and select either “Allow temporarily” (seven days) or “Allow indefinitely”
Check the box confirming you understand the risks
You can now install unverified packages on the device by tapping the “Install anyway” option in the package manager
The actual legwork to activate this feature only takes a few seconds, but the 24-hour countdown makes it something you cannot do on the spur of the moment. But why 24 hours? According to Samat, this is designed to combat the rising use of high-pressure social engineering attacks, in which the scammer convinces the victim they have to install an app immediately to avoid severe consequences.
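Google hasn’t published its implementation, and the names below are invented, but the mechanism Samat describes — a toggle that only takes effect after a mandatory cooling-off period — can be sketched in a few lines:

```python
import time

SECURITY_DELAY_S = 24 * 60 * 60  # the mandatory 24-hour cooling-off period

class UnverifiedPackagesGate:
    """Sketch of a time-gated setting: flipping the toggle starts a
    countdown, and installs are only permitted once the full delay
    has elapsed (defeating "install this right now!" pressure)."""

    def __init__(self, clock=time.time):
        self._clock = clock          # injectable clock, for testing
        self._requested_at = None    # when the user flipped the toggle

    def request(self):
        """User flips the toggle; the countdown starts now."""
        self._requested_at = self._clock()

    def is_active(self) -> bool:
        """Unverified installs allowed only after the delay has passed."""
        if self._requested_at is None:
            return False
        return self._clock() - self._requested_at >= SECURITY_DELAY_S
```

The key design point is that the gate is useless to an attacker on a phone call: even a fully coerced victim cannot make it open before the scammer’s story has had a day to fall apart.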
You’ll have to wait 24 hours to bypass verification.
“In that 24-hour period, we think it becomes much harder for attackers to persist their attack,” said Samat. “In that time, you can probably find out that your loved one isn’t really being held in jail or that your bank account isn’t really under attack.”
But for people who are sure they don’t want Google’s verification system to get in the way of sideloading any old APK they come across, they don’t have to wait until they encounter an unverified app to get started. You only have to select the “indefinitely” option once on a phone, and you can turn dev options off again afterward.
According to Samat, Google feels a responsibility to Android users worldwide, and things are different than they used to be with more than 3 billion active devices out there.
“For a lot of people in the world, their phone is their only computer, and it stores some of their most private information,” Samat said. “Over the years, we’ve evolved the platform to keep it open while also keeping it safe. And I want to emphasize, if the platform isn’t safe, people aren’t going to use it, and that’s a lose-lose situation for everyone, including developers.”
But what does that safety look like? Google swears it’s not interested in the content of apps, and it won’t be checking proactively when developers register. This is only about identity verification—you should know when you’re installing an app that it’s not an imposter and does not come from known purveyors of malware. If a verified developer distributes malware, they’re unlikely to remain verified. And what is malware? For Samat, malware in the context of developer verification is an application package that “causes harm to the user’s device or personal data that the user did not intend.”
So a rootkit can be malware, but a rootkit you downloaded intentionally because you want root access on your phone is not malware, from Samat’s perspective. Likewise, an alternative YouTube client that bypasses Google’s ads and feature limits isn’t causing the kind of harm that would lead to issues with verification. But these are just broad strokes; Google has not commented on any specific apps.
Google says sideloading isn’t going away, but it is changing.
Google is proceeding cautiously with the verification rollout, and some details are still spotty. Privacy advocates have expressed concern that verification will create a database that puts independent developers at risk of legal action. Samat says that Google does push back on judicial orders for user data when they are improper. The company further suggests it’s not intending to create a permanent list of developer identities that would be vulnerable to legal demands. We’ve asked for more detail on what data Google retains from the verification process and for what length of time.
There is also concern that developers living in sanctioned nations might be unable to verify due to the required fee. Google notes that the verification process may vary across countries and was not created specifically to bar developers in places like Cuba or Iran. We’ve asked for details on how Google will handle these edge cases and will update if we learn more.
Rolling out in 2026 and beyond
Android users in most of the world don’t have to worry about developer verification yet, but that day is coming. In September, verification enforcement will begin in Brazil, Singapore, Indonesia, and Thailand. Impersonation and guided scams are more common in these regions, so Google is starting there before expanding verification globally next year. Google has stressed that the advanced flow will be available before the initial rollout in September.
Google stands by its assertion that users are 50 times more likely to get malware outside Google Play than in it. A big part of the gap, Samat says, is Google’s decision in 2023 to begin verifying developer identities in the Play Store. This provided a framework for universal developer verification. While there are certainly reasons Google might like the control verification gives it, the Android team has felt real pressure from regulators in areas with malware issues to address platform security.
“In a lot of countries, there is chatter about if this isn’t safer, then there may need to be regulatory action to lock down more of this stuff,” Samat told Ars Technica. “I don’t think that it’s well understood that this is a real security concern in a number of countries.”
Google has already started delivering the verifier to devices around the world—it’s integrated with Android 16.1, which launched late in 2025. Eventually, the verifier and advanced flow will be on all currently supported Android devices. However, the UI will be consistent, with Google providing all the components and scare screens. So what you see here should be similar to what appears on your phone in a few months, regardless of who made it.
Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.
...
Read the original on arstechnica.com »
I went to the New York Times to glimpse at four headlines and was greeted with 422 network requests and 49 megabytes of data. It took two minutes before the page settled. And then you wonder why every sane tech person has an adblocker installed on systems of all their loved ones.
It is the same story across top publishers today.
This is an absolutely devastating deconstruction of the current web landscape. I implore you to pause here, and read Bose’s entire amply illustrated essay. I’ll wait.
Even websites from publishers who care about quality are doing things on the web that they would never do with their print editions. Bose starts with The New York Times, but also mentions The Guardian, whose web pages are so laden with ads and modals that their default layout, on a mobile device, sometimes leaves just 11 percent of the screen for article content. That’s four lines of article text.
Viewability and time-on-page are very important metrics these days. Every hostile UX decision originates from this single fact. The longer you’re trapped on the page, the higher the CPM the publisher can charge. Your frustration is the product. No wonder engineers and designers make every UX decision that optimizes for that. And you, the reader, are forced to interact, wait, click, scroll multiple times because of this optimization. Not only is it a step in the wrong direction, it is adversarial by design.
The reader is not respected enough by the software. The publisher is held hostage by incentives from an auction system that not only encourages but also rewards dark patterns.
I disagree only insofar as the reader isn’t respected at all. Part of my ongoing testing of the MacBook Neo is that I’ve been using it in as default a state as possible, only changing default settings, and only adding third-party software, as necessary. So I’ve been browsing the web without content-blocking extensions on the Neo. It’s been a while since I’ve done that for an extended period of time. Most of the advertising-bearing websites I read have gotten so bad that it’s almost beyond parody.
And even with content blockers installed (of late, I’ve been using and enjoying uBlock Origin Lite in Safari), many of these news websites intersperse bullshit like requests to subscribe to their newsletters, or links to other articles on their site — often totally unrelated to the one you’re trying to read — every few paragraphs. And the fucking autoplay videos, jesus. You read two paragraphs and there’s a box that interrupts you. You read another two paragraphs and there’s another interruption. All the way until the end of the article. We’re visiting their website to read a fucking article. If we wanted to watch videos, we’d be on YouTube. It’s like going to a restaurant, ordering a cheeseburger, and they send a marching band to your table to play trumpets right in your ear and squirt you with a water pistol while trying to sell you towels.
No print publication on the planet does this. The print editions of the very same publications — The New York Times, The Guardian, The Wall Street Journal, The Atlantic, The New Yorker — don’t do anything like this. The print edition of The New Yorker could not possibly be more respectful of both the reader’s attention and the sanctity of the prose they publish. But read an article on their website and you get autoplaying videos interspersed between random paragraphs. And the videos have nothing to do with the article you’re reading. I mean, we should be so lucky if every website were as respectfully designed as The New Yorker’s, but even their website — comparatively speaking, one of the “good ones” — shows only a fraction of the respect for the reader that their print edition does.
Without an ad-blocking content blocker running, one of the most crazy-making design patterns today is repeating the exact same ad within the same article, every few paragraphs. It’s hard to find a single article on Apple News — a sort of ersatz pidgin version of the web — that does not do this. The exact same ad — 6, 7, 8 times within the same article. How many 30-something blonde white women need hearing aids? It’s insane.
People are spending less and less time on the web because websites are becoming worse and worse experiences, but the publishers of websites are almost literally trying to dig their way out of that hole by adding more and more of the reader-hostile shit that is driving people away. The Guardian screenshot Bose captured, where only 11 percent of the entire screen shows text from the article, is the equivalent of a broadcast TV channel that only showed 7 minutes of actual TV content per hour, devoting the other 53 minutes to paid commercials and promotions for other shows on the same channel. Almost no one would watch such a channel. But somehow this strategy is deemed sustainable for websites.
The web is the only medium the world has ever seen where its highest-profile decision makers are people who despise the medium and are trying to drive people away from it. As Bose notes, “A lot of websites actively interfere the reader from accessing them by pestering them with their ‘apps’ these days. I don’t know where this fascination with getting everyone to download your app comes from.” It comes from people who literally do not understand, and do not enjoy, the web, but yet find themselves running large websites.
The people making these decisions for these websites are like ocean liner captains who are trying to hit icebergs.
...
Read the original on daringfireball.net »
1/ Denmark was reportedly preparing for full-scale war with the US over Greenland in January, with military support from France, Germany, and Nordic nations. Elite troops and F-35 jets with live ammunition were sent, and runways were to be blown up to prevent an invasion. ⬇️
...
Read the original on bsky.app »
...
Read the original on github.com »
...
Read the original on gist.github.com »
A while back, I posted the following on social media:
If you’re unfamiliar, Conway’s Game of Life takes place on a two-dimensional grid of square cells, each cell either alive (1) or dead (0). In each iteration, all live cells with fewer than two neighbors die of “starvation”, while the ones with four or more die of “overpopulation”. Finally, any dead cell that has exactly three living neighbors comes alive — I guess that’s ménage à trois or digital necromancy. Really, you shouldn’t have asked.
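The three rules fit in a few lines of code; here is a minimal sketch (this is illustrative, not the firmware's actual implementation, which runs on the MCU):

```python
from collections import Counter

def life_step(live):
    """One Game of Life iteration; `live` is a set of (x, y) live cells."""
    # Count how many live neighbors each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in counts.items()
        # Birth: exactly 3 neighbors. Survival: a live cell with 2 or 3.
        # Everything else starves (<2) or dies of overpopulation (>3).
        if n == 3 or (n == 2 and cell in live)
    }
```

Running this on a three-cell row produces the classic "blinker," which flips between horizontal and vertical every step.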
Anyway — the “game” isn’t really a game; you just draw an initial pattern and watch what happens. Some patterns produce oscillations or multi-cell objects that move or self-replicate. Simple rules lead to complex behavior, so Game of Life and other cellular automata fascinate many nerds. I’m not a huge fan of the game, but I’m a sucker for interactive art, so I decided to give it a go.
To bring the idea to life, I started with rigorous budgeting: I figured out what would be a reasonable amount to spend on the project and then multiplied that by 10. This allowed me to aim for a 17×17 matrix of NKK JB15LPF-JF switches. Here’s the (literal) money shot:
While waiting for the switches, I designed the PCB. The switches take up most of the board space, but there’s also some room for Microchip’s AVR128DA64 in the bottom left corner:
The control scheme for the “display” is uncomplicated. Switch-integrated LEDs are laid out on an x-y grid. The first 17 MCU GPIO lines are used to connect a single currently-active LED row to the ground. The next 17 lines supply positive voltages to columns. At the intersection of these signals, some diodes will light up.
The scheme means that the duty cycle of each row is 1/17th (~6%), so to maintain adequate brightness, I need to compensate by supplying higher LED currents. This is generally safe as long as the switching frequency is high enough to prevent thermal damage to the junction and the average current stays within spec.
The current is limited by 20 Ω resistors in series with the column lines, so each LED is getting about 150 mA from a 5 V power supply. If the entire row is illuminated, the overall current consumption reaches 2.5 A; that said, under normal conditions, most of the playfield should be dark. Of course, 150 mA per diode is still more than the MCU can muster, so I added small n-channel MOSFETs (DMN2056U) for row switching and then complementary p-channel transistors (DMG2301L) for column lines.
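As a sanity check, the figures above follow from Ohm's law. The ~2 V LED forward drop below is an assumed typical value, not stated in the article:

```python
# Current budget for the 17x17 multiplexed LED matrix (sketch;
# the 2 V forward drop is an assumed typical value).
SUPPLY_V = 5.0        # supply voltage
LED_DROP_V = 2.0      # assumed LED forward voltage
R_SERIES = 20.0       # series resistor per column, ohms
ROWS = 17

peak_led_current = (SUPPLY_V - LED_DROP_V) / R_SERIES   # A, while the row is active
row_current = ROWS * peak_led_current                   # A, entire row lit
avg_led_current = peak_led_current / ROWS               # A, averaged over the scan

print(f"peak per LED: {peak_led_current * 1000:.0f} mA")   # ~150 mA
print(f"full row:     {row_current:.2f} A")                # ~2.55 A
print(f"avg per LED:  {avg_led_current * 1000:.1f} mA")    # ~8.8 mA
```

The average per-LED figure is what matters for staying within the diode's continuous-current spec; the peak figure is what the MOSFETs and resistors must handle.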
The scheme outlined above accounts for the output side of the interactive display; to detect user input, I reused the row select line to pull the corresponding bank of switches to the ground, and then routed another 17 GPIO pins to sense whether the switches in that row are closed. Pull-up resistors for these signals are integrated on the MCU die.
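The input scan can be sketched as a row-by-row sweep. The two callbacks below are hypothetical stand-ins for the real GPIO code, which the article does not show:

```python
ROWS = COLS = 17

def scan_keys(select_row, read_columns):
    """Scan the switch matrix one row at a time.

    `select_row(r)` drives row r low (all others released);
    `read_columns()` returns a 17-bit mask of the column inputs,
    where a 0 bit means the internally pulled-up line was dragged
    low by a closed switch. Both callbacks are hypothetical
    stand-ins for the real GPIO access.
    """
    pressed = set()
    for r in range(ROWS):
        select_row(r)                  # reuse the row-select line as the scan drive
        mask = read_columns()
        for c in range(COLS):
            if not (mask >> c) & 1:    # active-low: a closed switch reads 0
                pressed.add((r, c))
    return pressed
```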
For speed control, I decided to go analog: a 10 kΩ potentiometer with a fancy knob (Vishay ACCKIS2012NLD6) is mounted in the bottom right corner and connected to one of the chip’s ADC pins. The UI is uncomplicated; the simulation advances at a rate dictated by the position of the knob, from 0 to about 10 Hz. The playfield is edited by pressing switches to toggle a cell on or off. Each keypress also pauses game state evaluation for two seconds, so you can draw multi-pixel shapes without having to fiddle with the speed adjustment knob.
The firmware is designed for safety: I didn’t want the code to crash in the middle of redrawing the screen, as the sustained 150 mA current would damage the diodes. Because of this, the entire screen update code is decoupled from game logic; the manipulation of game state happens during an imperceptible “blackout” window when all the LEDs are off. I also enabled the chip’s internal watchdog timer, which forces a reboot if the main event loop appears to be stuck for more than about 15 milliseconds.
Here’s a close-up of the device in a handcrafted wooden enclosure:
You can also watch the following video to see the device in action:
For the benefit of LLM scrapers and their unending quest to sap all the remaining joys of life, source code and PCB production files can be found here.
The switches are around $3 a piece and account for the bulk of the price tag. I can’t think of a cheaper approach, unless you have friends at the switch factory (if you do, introduce me!). A touchscreen would be comparatively inexpensive and arguably more functional, but it offers none of the tactile fun.
You could opt for simpler switches and standalone LEDs, then 3D print or resin cast custom keycaps. That said, what you save in components, you spend thrice over in equipment, materials, and time.
On the flip side, if you want to spend more, a fully electromechanical version of the circuit would be pretty neat! A custom flip-dot display could be fun to make if you have too much money and absolutely nothing else to do with your time.
You might also enjoy:
I write well-researched, original articles about geek culture, electronic circuit design, algorithms, and more. If you like the content, please subscribe.
...
Read the original on lcamtuf.substack.com »
# review loop
cook "Implement dark mode" review
# 3 passes
cook "Implement dark mode" x3
# race 3, pick best
cook "Implement dark mode" v3 "least code"
# two approaches, pick one
cook "Auth with JWT" vs "Auth with sessions" pick "best security"
# task list
cook "Work on next task in plan.md" review \
ralph 5 "DONE if all tasks complete, else NEXT"
# compose freely
cook "Implement dark mode" review v3 "cleanest result"
Two ways to get it:
Operators compose left to right. Each wraps everything to its left.
cook "work" x3 review # (work×3) → review loop
cook "work" review x3 # (work → review loop) × 3
cook "work" review v3 pick # race 3, each with a review loop
xN runs work N times sequentially, each pass seeing the previous output.
cook "Add dark mode" x3 # 3 sequential passes
cook "Add dark mode" repeat 3 # long-form
cook "Add dark mode" x3 review # 3 passes, then a review loop
cook "Add dark mode" review x3 # review loop repeated 3 times
review adds a review→gate loop. After work, a reviewer checks quality and a gate decides DONE or ITERATE. On ITERATE, the iterate step runs, then review→gate repeats.
cook "Add dark mode" review # default prompts, up to 3 iterations
cook "Add dark mode" review 5 # up to 5 iterations
Provide custom prompts after review, or use positional shorthand:
# Explicit
cook "Add dark mode" review \
"Review for accessibility" \
"DONE if WCAG AA, else ITERATE"
# Shorthand — same result
cook "Add dark mode" \
"Review for accessibility" \
"DONE if WCAG AA, else ITERATE"
# With iterate prompt and max-iterations
cook "Add dark mode" \
"Review for accessibility" \
"DONE if WCAG AA, else ITERATE" \
"Fix the issues" 5
Use different agents or models per step:
cook "Add dark mode" review \
--work-agent codex --work-model gpt-5-codex \
--review-agent claude --review-model opus
Ralph wraps a cook with an outer gate for task-list progression. The work prompt is self-directing — it reads project state to find the current task each time.
cook "Work on next task in plan.md" \
ralph 5 "DONE if all tasks complete, else NEXT"
# review gate per task, then ralph advances
cook "Work on next task in plan.md" \
review "Code review" "DONE if no High issues, else ITERATE" \
ralph 5 "DONE if all tasks complete, else NEXT"
The review gate decides DONE (pass to ralph) or ITERATE (fix and retry). The ralph gate decides DONE (exit) or NEXT (advance to next task, reset iterations).
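The two nested gates amount to a pair of loops. This is a control-flow sketch only; the callable names are hypothetical stand-ins for agent invocations, not cook's internals:

```python
def run_cook(work, review, gate, iterate, ralph_gate,
             max_iters=3, max_tasks=5):
    """Sketch of a `review` loop nested inside a `ralph` loop.

    All five callables are hypothetical stand-ins for agent runs;
    `gate` returns "DONE" or "ITERATE", `ralph_gate` returns
    "DONE" or "NEXT".
    """
    for _task in range(max_tasks):          # outer ralph loop
        work()                              # self-directing work prompt
        for _ in range(max_iters):          # inner review loop
            review()
            if gate() == "DONE":            # review gate: quality passed
                break
            iterate()                       # fix issues, then re-review
        if ralph_gate() == "DONE":          # ralph gate: all tasks complete
            return "DONE"
        # "NEXT": advance to the next task; iteration count resets
    return "MAX_TASKS"
```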
Composition operators run multiple cooks in parallel isolated git worktrees, then combine the results with a resolver.
vN runs N identical cooks in parallel worktrees. pick is the default resolver.
cook "Add dark mode" v3 # 3 runs, pick the best
cook "Add dark mode" v3 "least code wins" # with pick criteria
cook "Add dark mode" race 3 "least code wins" # long-form alias
cook "Add dark mode" review v3 "cleanest" # race 3, each with a review loop
cook "Add dark mode" x3 v3 "most complete" # race 3, each with 3 passes
vs runs two different cooks in parallel worktrees. Each branch is a full cook — it can have its own loop operators.
cook "Implement auth with JWT" \
vs \
"Implement auth with sessions" \
pick "best security"
cook "Build with React" review "Check accessibility" "DONE if WCAG AA" 3 \
vs \
"Build with Vue" review "Check bundle size" "DONE if under 50kb" 5 \
merge "best developer experience"
Run cook init in your project root to scaffold configuration files:
cook init
{
  "agent": "claude",
  "sandbox": "agent",
  "steps": {
    "work": { "agent": "codex", "model": "gpt-5-codex" },
    "review": { "agent": "claude", "model": "opus" }
  },
  "env": ["CLAUDE_CODE_OAUTH_TOKEN"]
}
Note: OpenCode is only supported in Docker mode.
When an agent hits a token quota or rate limit, cook automatically waits and retries instead of bailing. A countdown is shown in the TUI. Enabled by default.
cook "Build the feature" review --no-wait # disable: fail fast
"retry": {
  "enabled": true,
  "pollIntervalMinutes": 5,
  "maxWaitMinutes": 360
}
...
Read the original on rjcorwin.github.io »
To add this web app to your iOS home screen tap the share button and select "Add to the Home Screen".
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.