I saw a thread on Twitter the other day where some developers were dissing the w3schools website. There are apparently browser plugins that block the site? I don't know why they don't like it. Given a choice to point to this page or this one, I'll generally pick the one on w3schools, because there's a chance that people who don't know Node will understand it, and might learn something, and learning imho is a universal good.
Similarly, I appreciate it when traveling if people don't make fun of the fact that I don't know where everything is in their hometown, and try to return the favor when people need help finding their way around my hometown. If I know a little bit of their language I try to throw it in -- grazie! prego! buon giorno!
I like w3schools because they tend to show you the info you need in the order you need it. Other developer docs more often show you stuff in the wrong order, and leave out details that are necessary to understanding the topic. They may work well for experienced programmers, but what's so bad about making what we do more accessible to the uninitiated?
TL;DR -- there are significant problems running River5 on Glitch.
Yesterday I posted a link to a River5 server running on Glitch, the result of a braintrust query earlier in the day. This was significant because Glitch is easy to get started with for people new to running servers, a good thing, and it's free. Seeing it run River5 was great. Alas, when I came back an hour later, the server had lost its memory of previous stories and had started over. You can see this by watching the dashboard page on the server.
I found a doc that explains its technical limits, notably:
Projects sleep after 5 minutes if they are not used, and those running for more than 12 hours are stopped. Both wake again when they receive a HTTP request.
This is similar to what happens on Heroku with free projects. So I tried what had worked for Heroku: I wrote a script, running on my desktop, that reads a fast page on the server once a minute. According to their doc, that should keep the server running.
River5 maintains the data about the feeds it's following and the stories it has seen in the local filesystem. That filesystem is wiped and recreated when the server is shut down and restarted. So, even with a keep-alive script, it will lose its memory after 12 hours.
However, this paragraph seems to contradict that conclusion --
Projects have a limit of 128MB of space on the container. Though things written to '/tmp' don't count towards that, nor do your Node modules, and we use compression to squeeze the most out of that space. Plus, there's an additional 512MB of assets storage space too.
I'm guessing they have an API for this? Not sure. River5 just keeps JSON files in the filesystem. It uses the Node fs package to read and write.
James Comey is a lawyer and bureaucrat.
He doesn't have that much to say.
He was spectacularly wrong about something really important, and doesn't know it.
And he is no Michael Wolff, a muckraker and rabble-rouser by profession.
If you want an idea of why no one told you what Facebook was up to, look no further than the press. It was their job to tell you, after the tech companies.
Here's the lead paragraph of a news story written by John Markoff in the NY Times on this day in 2015.
That was and probably still is the way the press views the tech industry. Until they get over it, don't expect much reality from them re tech.
A new TV show format. Tours of neighborhoods in various parts of the US. Show people in different parts how we live, and vice versa.
Walk through a typical supermarket and show what you can buy and what the prices are.
The nearest airport.
An average commute.
See it as a person living there would see it.
Confront perceptions with reality.
Reality TV that is real reality.
I found a feedBase problem, an interaction with the new checkboxes, de-duping and dereferencing feed URLs. It would manifest this way: Click a checkbox for a feed, reload the page, the feed is unchecked. But only for a few feeds. For most feeds it worked as it should (that's why I didn't catch the problem the first time around).
The common denominator: each of the broken feeds was one of the de-duped feeds on the hotlist. The solution is to be careful with the de-duping map, to always map to the one that's preferred by the server, because we deref the URL before subscribing. We weren't doing that for a few of the de-duped feeds. The problem may come up with future mappings and I want to be sure we don't have to repeat the debugging process.
Another thing -- when dereferencing a URL, if the only difference is the protocol, don't use the deref, stick with what you have.
I'm beginning to realize that we need feeds to have a guid, to take all the guesswork out of this. It's a real mess! Once you try to maintain a database of feeds, something I've not actually done myself before, you buy into trying to come up with a canonical ID for a feed. The URL works pretty well, until you realize that there are several different URLs for each feed.
Also realizing we should have popped the protocol off the URL before using it as a key so http://xxx would be the same feed as https://xxx.
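Both ideas, popping the protocol off the key and ignoring protocol-only derefs, can be sketched like this. The function names are mine, not feedBase's actual code.

```javascript
// Hypothetical helpers -- my naming, not feedBase's.

// Drop the protocol so http://xxx and https://xxx map to the same key.
function urlToKey(url) {
  return url.replace(/^https?:\/\//i, "");
}

// If a dereferenced URL differs from the original only in protocol,
// stick with the original instead of using the deref.
function sameExceptProtocol(url1, url2) {
  return urlToKey(url1) === urlToKey(url2);
}

console.log(urlToKey("http://scripting.com/rss.xml"));
console.log(sameExceptProtocol("http://scripting.com/rss.xml", "https://scripting.com/rss.xml"));
```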
Little-known fact: I designed and developed a programming language.
My goal was to create an environment I would work in for the rest of my career. I just realized it's exactly 30 years later, and I'm still using it.
30 fucking years. I think I earned the right to say it that way. 🚀
Where would I start? db.c of course. 💥
PS: Most people don't know about Frontier. But you probably do know about things that were developed in Frontier. Like the first blogs, podcasts, RSS feeds, readers and content creation tools, XML-RPC and lots of other good shit. People would ask me how I got so much done. "Great tools." That's Frontier.
Imagine a world without phones.
In a world without phones, you could listen to people with beautiful voices speak words designed by psychologists to make you want to buy tacos or life insurance.
But you couldn't listen to your daughter or son.
Blogging lets us write for each other.
I'm thinking about getting a new iPad, and said so on Twitter. I got a bunch of responses, including this blog post from Matt Ballantine, who loves the iPad because of its compatibility with Apple's pencil. Based on his report, I decided to get the new iPad and the pencil. I used to be a diagram person; as part of pitching ideas to other people, I'd develop what I called a chalk talk. A very good way to communicate, highly personal and persuasive.
Ariel Anbar posted a caveat about the pencil on Facebook.
Hmmm. That's too bad. I wondered why Apple didn't promote the product more, maybe this is why. Even so, I think I'll give it a try.
I don't know what the problem is; I had no problems installing it on my Mac or on a Linux server.
When I have trouble with NPM, this is what I do:
What's unusual about iconv is that it's written in C, so as part of the npm install process it has to be compiled to machine code.
I am anything but an expert in NPM problems, that's why I'm raising a flag here on Scripting News.
Update: I think Anton got to the bottom of it. Some systems have it set up so that you can't run downloaded stuff without modifying permissions.
As you know from reading this blog, I am a big fan of efforts to make the web long-lived. And that's why I was interested in this story about how the NYT is creating new archives of old stories so they appear on the web exactly as they did when the stories ran.
For example, here's an archive of the NYT home page for 9/11/2001. Interestingly, I took a screen shot of that page earlier in the day, as the story was unfolding. There was no question history was happening that day.
It's good, but who is going to do this for historic weblogs? I've kept my blog around, and various experiments I've done over the years. I have generally tried to use technology that I believed was going to stick around, so I've never built on Flash, for example. I've used static HTML files as much as possible. But even so, there are quite a few gaps in my archive, especially where I have let domains lapse.
And of course a huge bonfire of breakage is coming as Google tries to turn off HTTP. This is something users of the web, news orgs, libraries, historians, researchers, should join me in condemning. Changing to a new protocol is fine if you want to do it, but trying to force people to? That's a company that needs to be told to stay in its lane. More on this on the Google and HTTP faq.
I want a future-of-news conference where we plan new open systems for news publishing and reading, without sponsorship of big tech companies such as Facebook or Google.
There would be a session at the conference entitled How To Get Facebook and Google to Give Us Money, and it would be off-topic at every other session. That way the sessions wouldn't all be repetitive expressions of powerlessness and we could get some work done.
People who were at BloggerCon will recognize this as the How To Make Money With Your Blog session at that conference. We swept up all the powerlessness into one session, and made it off-topic at every other one. It worked.
Artists when they get together, all they want to talk about is how they need to make money. I've never seen a discussion among creative people that didn't immediately devolve to this. In F-O-N, big tech companies encourage this. They want all the attention focused on them.
It felt like it was going to be a small change, but it wasn't.
That said, there are still references to fargo.io in the code. That'll take more time to shake out because this code is shared with my other projects. But the main files for displaying rivers are now in jsdelivr.
If you want to see all the code that's being accessed through the CDN, it's in the River5 repository on GitHub.