Hacker News | mbreese's comments

Same. I know I have a couple someplace in a bin. That and another embedded card from the era, but I think it had something like a DIMM footprint. I thought it was also Dallas semi, but I can’t find it or remember what it is though…

I remember thinking that some of the tracking features (temperature) of the button would be helpful in some situations. But the ring was the crazy model. Between these and smart cards, authentication was starting to look futuristic. I even remember getting a smart card reader from my credit card company. They thought it would make for more secure web transactions.

I still see iButtons in the wild in odd places. Most recently, I saw them tracking car keys at dealerships. The last car I test drove had a key attached to a fob with an iButton. I was more excited by the iButton tracker than the car.

But I thought of it as an example of how long-lasting some design decisions can really be. I’m sure someone designed this system 20-25 years ago and it is still in service today. Today, I’m sure it would be NFC. But now I’m thinking about what the iButton of 2050 will look like.


I used to see this in bash scripts all the time. It’s somewhat gone out of favor (along with using long bash scripts).

If you had to prompt a user for a password, you’d read it in, use it, then trash the value:

    read -p "Password: " PASSWD
    # do something with $PASSWD
    PASSWD="XXXXXXXXXXXXXXXXXX"
It’s not pretty, but a similar concept. (I also don't know how helpful it actually is, but that's another question...)
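A slightly safer sketch of the same pattern: `read -s` keeps the password off the screen, and `unset` removes the variable entirely instead of overwriting it. (The here-string is just a stand-in for interactive entry so the snippet runs non-interactively; this is bash, not POSIX sh.)

```shell
# Read silently (-s, no echo; -r, no backslash mangling), use the
# value, then unset it rather than overwrite it.
get_pw() { IFS= read -r -s -p "Password: " PASSWD; }

get_pw <<< 'hunter2'        # stand-in for interactive input
# ... do something with "$PASSWD" ...
used="${#PASSWD} chars used"
unset PASSWD                # variable is removed, not just masked
echo "$used; PASSWD set afterwards? ${PASSWD-no}"
```

Whether this defeats a determined memory dump is still questionable, but at least the value doesn’t linger in a named shell variable.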

I wonder how robust the solder joints are for castellated boards. I’d still imagine that to be a weak point vibration-wise. Definitely easier to automate, but would it be that much more robust?

Thinking about those CM sockets, I think the answer is yes: a castellated solder joint (is that the right term?) would be stronger. But other sockets might be more robust than the CM0.


I’ve never heard it described this way: AGI as similar to human flight. I think it’s subtle and clever, two of my favorite properties.

To me, we have both achieved and not achieved human flight. Can humans themselves fly? No. Can people fly in planes across continents? Yes.

But, does it really matter if it counts as “human flight” if we can get from point A to point B faster? You’re right - this is an argument that will last ages.

It’s a great turn of phrase to describe AGI.


Thank you! I’m bored of “moving goalposts” arguments as I think “looks different than we expected” is the _ordinary_ way revolutions happen.

Isn’t that for scraping? I think this is for adding an MCP front end to a site (or making that possible).

Different use cases, I think.


It is the same. Context7 uses llms.txt to create a searchable index that can be used for coding and Q&A. It serves the same purpose as this tool, except I guess it is more of a standard, if you can even call it that at this stage.

I think this is a good idea in general, but perhaps a bit too simple. It looks like it only works for static sites, right? It performs a JS fetch to pull in the HTML and then converts it (in a quick and dirty manner) to Markdown.
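To illustrate what I mean by quick and dirty (this is purely my own caricature, not the project's actual code), a crude conversion can be little more than dropping nav/script blocks and stripping tags:

```shell
# Hypothetical page fragment standing in for fetched HTML.
html='<nav>Home | About</nav><h1>Title</h1><p>Body text.</p><script>track()</script>'

# Drop nav/script noise, turn <h1> into a Markdown heading,
# then strip whatever tags remain (GNU sed).
md="$(printf '%s\n' "$html" \
  | sed -E 's#<(nav|script)>[^<]*</(nav|script)>##g' \
  | sed -E 's/<h1[^>]*>/# /g; s#</h1>#\n#g' \
  | sed -E 's#<[^>]+>##g')"

printf '%s\n' "$md"
```

Anything nested, malformed, or rendered client-side breaks this immediately, which is why real pages end up needing an actual parser or DOM.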

I know this is pointing to the GH repo, but I’d love to know more about why the author chose to build it this way. I suspect it keeps costs low/free. But why CF workers? How much processing can you get done for free here?

I’m not sure how you could do much more in a CF worker, but this might be too simple to be useful on many sites.

Example: I had to pull in a docs site that was built for a project I’m working on. We wanted an LLM to be able to use the docs in their responses. However, the site was based on VitePress. I didn’t have access to the source markdown files, so I wrote an MCP fetcher that uses a dockerized headless chrome instance to load the page. I then pull the innerHTML directly from the processed DOM. It’s probably overkill, but an example of when this tool might not work.

But — if you have a static site, this tool could be a very simple way to configure MCP access. It’s a nice idea!


The simplicity is a feature. I avoided headless Chrome because standard fetch tools (and raw DOM dumps) pollute the context with navbars and scripts, wasting tokens. This parser converts to clean Markdown for maximum density.

Also, by treating this as an MCP Resource rather than a Tool, the docs are pinned permanently instead of relying on the model to "decide" to fetch them.

Cloudflare Workers handle this perfectly for free (100k reqs/day) without the overhead of managing a dockerized browser instance.


I like the idea of exposing this as a resource. That’s a good idea so you don’t have to wait for a tool call. Is using a resource faster, though? Doesn’t the LLM still have to make a request to the MCP server in both cases? Or is the idea that because it is pinned a priori, you’ve already retrieved and processed the HTML, so the response will be faster?

But I do think the lack of a JavaScript loader will be a problem for many sites. In my case, I still run the innerHTML through a Markdown converter to get rid of the extra cruft. You’re right that this helps a lot. Even better if you can choose which #id element to load: Wikipedia surrounds the main article with a lot of extra material that adds fluff even after MD conversion. But without JS loading, you still won’t be able to process a lot of sites in the wild.

Now, I would personally argue that’s an issue with those sites. I’m not a big fan of dynamic JS-loaded pages. Sadly, I think that ship has sailed…


I think the idea here is that the web_fetch is restricted to the target site. I might want to include my documentation in an MCP server (from docs.example.com), but that doesn’t mean I want the full web available.

I think deprecation in intra-company code is a completely different beast. You either have a business case for the code or not. And if something is deprecated and a downstream project needs it, it should probably have the budget to support it (or code around the deprecation).

In many ways, the decision is easier because it should be based on a business use case or budget reason.


The business case is the easy part; the quagmire is getting the different teams to agree on who should support the business case, why it's more important than the business cases they wanted to spend cycles on instead, and how much of the budget pie supporting it takes. Less so when the place is small enough that everyone knows everyone's name; more so when it's large enough that they really don't care much about your business case, even though it'd be 10x easier to support from their side than from another's.

Oh. But that is a solved problem. The users of the library just copy the code from before the deprecation and then stick it in their codebase not to be maintained anymore. Problem solved. /s

> Mickey Mouse Clubhouse's lazy CG animation and unimaginative storytelling

I think it’s important to remember that you probably aren’t their target audience. Their audience expects to see simple characters with simple stories. The CG doesn’t need to be advanced, so having it fast to produce is the goal. It has to hold the interest of a toddler for 25 min without annoying the parents too much. Shiny and simple rendering is probably what they are going for. You can certainly argue about the educational qualities of the show, but I think entertaining was their primary goal for Mickey Mouse Clubhouse.

Also, this show hasn’t been made for years, has it? You’re looking at a show that was produced from 2006-2016. The oldest episodes would be almost 20-year-old CG; the newest are still nearly 10 years old. When it was fresh, the CG was pretty good compared to similar kids' shows.

My kids were young right in this window, and we watched a lot of Disney.

Disney definitely hit a CG valley, though, which you can see in some of their shows that switched from a 2D look to a more 3D rendering. Thankfully we aged out of those shows around 2015, so it has been a while. Disney has always been a content shop where quantity has its own quality, so I’m sure I’d have similar opinions to yours if I were looking at the shows now. But at the time, it wasn’t bad.

I’m not sure how the OpenAI integration will work. I can see all sorts of red flags here.


They brought it back this year as Mickey Mouse Clubhouse+. Same vibe, the animation is more polished but still simplistic.

I think y'all are thinking about this wrong.

Right now the deal is structured as Disney pays OpenAI. That's going to invert.

Once OpenAI pays Disney $3B/yr for Elsa, Disney is going to go to Google and say, "Gee, it sure would suck if you lost all your Disney content." Google will have to pay $5B/yr for Star Wars. And then TikTok, and then Meta... door to door licensing.

Nintendo, Marvel, all of the IP giants will start licensing their IPs to platforms.

This has never happened before, but we're at a significant and unprecedented changing of the tides.

IP holders weren't able to do this before because content creation was hard and the economics were 1% creation, 99% distribution. One guy would make a fan animation and his output was a single 5-minute video once every other month. Now everyone has access to creation.

Now that the creation/consumption funnel inverts or becomes combined, the IP holders can charge a shit ton of money to these platforms. Everyone is a creator, and IP enablement is a massive money making opportunity.

In five years, Disney, Warner, and Nintendo will be making absolute bank on YouTube, TikTok, Meta platforms, Sora, etc.

They'll threaten to pull IP just like sports and linear TV channels did to cable back in the day.

This will look a lot like cable.

Also: the RIAA is doing exactly this with Suno and Udio. They've got them in a stranglehold and are forcing them to remove models that don't feature RIAA artists. And they'll charge a premium for you to use Taylor Swift®.

Anyone can make generic AI cats or bigfoot - it's pretty bland and doesn't speak to people. But everyone wants to make Storm Troopers and Elsa and Pikachu. Not only do teenagers willfully immerse themselves in IP, but they're far more likely to consume well-known IP than original content. Creators will target IP over OC. We already know this. We have decades of data at this point that mass audiences want mass media franchises.

The "normies" will eat this up and add fuel to the fire.

Disney revenues are $90B a year. I would not be surprised if they could pull a brand new $30B a year off of social media IP licensing alone. Same for Nintendo and the rest of the big media brands. (WBD has a lot more value than they're priced at.)

This is the end game. Do you see it now?


>Now that the creation/consumption funnel inverts or becomes combined, the IP holders can charge a shit ton of money to these platforms. Everyone is a creator, and IP enablement is a massive money making opportunity.

This would be worrying if the content was 1) actually good or 2) not freely available. Trying to charge premiums for slop never works. Just ask McDonald's 2-3 years back. The damage to the Star Wars brand shows this isn't a long term strategy.

The 2nd issue with animation slop is the human element. We already made it very cheap for people to make content. No amount of Mickey or Star Wars is gonna undo the fact that people like looking at other people. Animation slop will find its audience, but it's not gonna overthrow TikTok with real(ish) people making people slop.

If Disney tries to pull out of Google, Google will double down on Shorts. This won't work on most companies. It's at best a nice hook into Disney+.


> This would be worrying if the content was 1) actually good or 2) not freely available.

The content is not freely available. You pay for it with ads or premium subscriptions. There is a massive amount of money being passed around behind the scenes.

When IP holders cut off Google's ability to host IP content, 50+% of YouTube immediately dies overnight.

Looking at the top videos on YouTube this week, 7 of the top 10 are all "Pop IP" content: Candy Crush the Movie, Miley Cyrus, "I wanna Channing All Over Your Tatum", Superman Drawn, Star Wars Elevator Prank, We are World of Warcraft, Red Bull.

People love and drown themselves in pop culture and corporate-owned IP. Whether that's music, games, anime - they love corporate-owned IP.

If this content gets pulled en masse, YouTube is fucked. YouTube has been getting all of this for free. That's something that could be done today, but it's just non-obvious. When you package that with the "creation enablement", it's a packaged good that can be licensed or sold enterprise-to-enterprise.

Disney is about to dip their toes in. Nintendo has already been experimenting with it. The concept is right there in front of them, and as distribution channels and content creation merge into one uniform thing - it'll be obvious.

> The damage to the Star Wars brand shows this isn't a long term strategy.

To be clear, this was made by some of the top humans in their field. And despite massive critical panning, it did print money for Disney (perhaps at the cost of long term engagement/interest).

> The 2nd issue on animation slop is the human element.

It's the difficulty, cost, time, talent element.

People consume more human content because more human content is created. Orders of magnitude more. It's easy.

Vivienne Medrano, Glitch Productions, Jaiden Dittfach, and many others have minted huge franchises on YouTube - views, merch, Amazon/Netflix deals, etc. The problem is that it takes them ages to animate each episode, whereas filming yourself on your smartphone is quick, easy, accessible, affordable, low-effort, low-material, and low-personnel.

Kids on twitch are watching each other become anime girls and furries with VTuber tech. They're willingly becoming those things and building fantasy worlds bigger than their public face identities. We just haven't had the technology to enable it at a wide scale yet.

This is all changing.


>The content is not freely available. You pay for it with ads or premium subscriptions.

Okay, free with ads is "free" to consumers. That will get swamped by TikTok. Subscription is premium. People won't pay for slop. Those are both covered.

>There is a massive amount of money being passed around behind the scenes.

Yes. But who's making a profit? You can only shuffle money around for so long, and we're hitting the breaking point of that. Advertisers won't invest in platforms they suspect are filled with bots and don't give ROI. Companies won't invest once saying "AI" stops being a get-rich-quick scheme. Customers won't invest once they run out of money.

It works, until it doesn't. Then it's suddenly freefall and people will act like they didn't hear the creaking for 5-10 years.

>When IP holders cut off Google's ability to host IP content, 50+% of YouTube immediately dies overnight.

YouTube isn't really known for "IP content". That debate ended in 2010 with Viacom. They in fact rampantly remove traces of IP content.

Meanwhile, they have a monopoly on video hosting and control payouts in an opaque way to millions of non-IP creators. Unless you think it's the end of premium media as we know it, Disney is still going to host trailers on YouTube and Vevo will host music videos. There's no reason to go anywhere. Disney+ and YouTube can exist simultaneously.

>To be clear, this was made by some of the top humans in their field. And despite massive critical panning, it did print money for Disney (perhaps at the cost of long term engagement/interest).

Yeah, in complete agreement. Short-term money, long-term damage. Media has a "lingering effect" where the reception of the prequel passes into the sequel and vice versa. So you can still have a profitable but panned release simply because the previous movie was that well received.

>It's the difficulty, cost, time, talent element. People consume more human content because more human content is created. Orders of magnitude more. It's easy.

Do you think that if we had the same amount of animation as we did live action content that they'd be consumed equally? I'm a huge animation fan and very skeptical.

Consider this phenomenon:

https://erdavis.com/2021/06/14/do-women-who-pose-with-their-...

Even in art spaces, people will engage more with the presence of a human face. Females more, but even males get a noticeable boost. You can chalk it up to lust or familiarity or anything else, but there seems to be some deeper issue at work than simply "there's more live-action slop for now".

If we do get more animation slop, I think it will veer a lot more toward hyperrealism instead, for similar reasons. I always see it as uncanny, but it doesn't seem to bother others as much. It'll just be trying to mimic live action at the end of the day.

>Kids on twitch are watching each other become anime girls and furries with VTuber tech. They're willingly becoming those things and building fantasy worlds bigger than their public face identities. We just haven't had the technology to enable it at a wide scale yet.

Sure. Animation is more engaging for kids. Kids aren't profitable, though; their parents are. Unless it's with ads, but advertising targeting kids has so much red tape.

I don't see a profitable model in a media empire focusing on kids. Even Nintendo gets a lot of its money from merchandising despite selling premium games with rare sales.


For what it's worth, I don't write MCP servers that are shell scripts. Mine are HTTP servers that load data from a database. It's nothing more exciting than a REST API with an MCP front end thrown on top.

Many people only use local MCP resources, which is fine... it provides access to your specific environment.

For me however, it's been great to be able to have a remote MCP HTTP server that responds to requests from more than just me. Or to make the entire chat server (with pre-configured remote MCP servers) accessible to a wider (company internal) audience.


Honest question: Claude can understand and call REST APIs given the docs, so what is the added value? Why should anyone wrap a REST API in another layer? What does it unlock?

I have a service that other users access through a web interface. It uses an on-premises open model (gpt-oss-120b) for the LLM and a dozen MCP tools to access a private database. The service is accessible from a web browser, but this isn’t something where the users need the ability to access the MCP tools or model directly. I have a pretty custom system prompt and MCP tools definitions that guide their interactions. Think of a helpdesk chatbot with access to a backend database. This isn’t something that would be accessed with a desktop LLM client like Claude. The only standards I can really count on are MCP and the OpenAI-compatible chat completions.

I personally don’t think of MCP servers as having more utility than local services that individuals use with a local Claude/ChatGPT/etc client. If you are only using local resources, then MCP is just extra overhead. If your LLM can call a REST service directly, it’s extra overhead.

Where I really see the benefit is when building hosted services or agents that users access remotely. Think more remote servers than local clients. Or something a company might use for a production service. For this use-case, MCP servers are great. I like having some set protocol that I can know my LLMs will be able to call correctly. I’m not able to monitor every chat (nor would I want to) to help users troubleshoot when the model didn’t call the external tool directly. I’m not a big fan of the protocol itself, but it’s nice to have some kind of standard.

The short answer: not everyone is using Claude locally. There are different requirements for hosted services.

(Note: I don’t have anything against Claude, but my $WORK only has agreements with Google and OpenAI for remote access to LLMs. $WORK also hosts a number of open models for strictly on-prem work. That’s what guided my choices…)


Gatekeeping (in a good way) and security. I use Claude Code in the way you described but I also understand why you wouldn’t want Claude to have this level of access in production.

Ironically models are sometimes more apt at calling REST or web APIs in general because that is a huge part of their training data.
