  • Can the people be trusted to rate news sources?

    What choice do we have? I know experts think they have the means that we don't, but they accept a set of premises about what news should be, that lead us to news that

    • Accepts that the US had to go to war with Iraq, and doesn't question the assumption that there were WMDs and probably nukes in their arsenal.
    • Shows endless clips of Trump rallies as if that was news.
    • Asks ridiculous uninformed questions about Hillary Clinton's emails.
    • States conventional wisdom among reporters as fact.
    • Assumes the Trump story is pretty much Watergate, when it's clearly not.
    • Where reporters are all trying to get a prize or a raise, trying to catch politicians in gotchas, and filtering out all the politicians who just want to get shit done.
    • Endlessly analyzes what today's events mean for the 2020 election, when the answer is not one thing.

    The news orgs that report this way would be rated highest by the experts, no doubt. But we need news to be better. It's the one thing I agree about with Trump supporters. The standard "trusted" news attaches to ideas that they won't let go of. And reports on non-news endlessly. Can we rate that kind of stuff way way down?

    We need a lot from news that they aren't giving us, because they do everything they can to not listen to the users. If Facebook really wants to do this, don't listen to the experts on this, they answer the wrong questions, imho. Find a way for the users to decide. I know they don't trust us, but we're all we got.

  • Here's a theoretical question with practical implications.

    In Node.js, is there a way to do interprocess communication between Node apps? I could set it up so both apps have an HTTP server, and the apps could communicate using XML-RPC, so at that level I know it's possible, but I'm wondering about a different, higher-performance, approach.

    Suppose I launch two apps from my app. I would do it exactly the way the forever utility does it. Is there some way for an app to call back to forever, and is there a way for forever to call into the app?
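
    Unless I'm missing something, Node already ships a channel like this: child_process.fork launches an app roughly the way forever does, and gives the two processes a built-in message pipe, so the launcher can call into the app and the app can call back. A minimal sketch -- the file name and message verbs are mine, not from forever or any real project:

    ```javascript
    const { fork } = require("child_process");
    const fs = require("fs");
    const os = require("os");
    const path = require("path");

    // Write a tiny child app to disk so the sketch is self-contained.
    const childFile = path.join(os.tmpdir(), "ipc-child-demo.js");
    fs.writeFileSync(childFile, `
      process.on("message", (msg) => {
        if (msg.verb === "ping") { // the launcher calling into the app
          process.send({ verb: "pong" }); // the app calling back to the launcher
        }
      });
    `);

    const child = fork(childFile); // launch it roughly the way forever would

    child.on("message", (msg) => {
      console.log("child answered: " + msg.verb); // prints "child answered: pong"
      child.kill(); // done with the demo
    });

    child.send({ verb: "ping" });
    ```

    The fork channel is JSON messages over a pipe, no HTTP involved, which is the higher-performance approach the question is after.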

    That way you could have all kinds of external interfaces abstracted.

    All of a sudden you could build a high-level OS for Node. A way for Node apps to share a data space.

    This is what we tried to do on the Mac with Frontier. We never convinced Apple to stay out of the way so we could do it. But in Node, with its open source culture, if there was a way for forever to do it, that means there's a way for you and me to do it too, because of course forever is open source. ;-)

    Badaboom. 💥

    PS: Of course I may be missing something obvious, a way to do this that I spaced out about. That does happen from time to time.

  • I was just talking with a friend, two ideas --

    • skicutter -- like Wirecutter but for ski areas. Where's the best skiing right now.
    • weedcutter -- same thing for weed.

    For extra credit, cross-tabulate. 💥

  • Journalists still think they were/are bloggers. Reporters using blogging tools is not blogging. Bloggers are reporters' sources. They say blogging is over because their professional CMSes caught up? They're in a bubble. A profession that should be good at listening only actually listens to itself.
  • There was an outage on scripting.com earlier today. I let a domain expire, thinking it wasn't in use. Haha. Okay. I renewed it. It seems to be resolving here, so the outage should be over. Sorry. I'll be more careful in the future. 🚀
  • With Google Reader shutting down and Facebook pulling out of news, and now HuffPost withdrawing, I feel great. Vindicated. Optimistic once again.

    There is no magic to platforms. Corporate platforms always end up as puddles. Little wrecked ecosystems that started with great bluster.

    The only platforms worth developing for are ones without a platform vendor. That is, open platforms based on open formats and protocols.

    I was asked why Google Reader is on my list.

    • They didn't support all of RSS, so blogging became limited to what Google Reader understood. And then they just threw it all out, like a massive oil spill, and did nothing to clean it up. In the end it would have been better if it never existed.
  • Update on the Feeds for Journalists project. "The current list doesn't have much meta-news or news-about-news. It's mostly just plain news. I am totally in favor of adding Canadian feeds, but for news orgs that are producing news about Canada, and news from a Canadian perspective."
  • I tried writing at Huffington Post, many years ago, hoping to get more flow. When I finally got a hot story on HP, here's what they did.

    • Rewrote it.
    • Redirected traffic from my page to theirs.

    That's when the great experiment ended. 💥

  • I support the NYT turning over its editorial page to Trump voters. But we keep hearing from them. How about Clinton voters next. And black voters. And people who didn't vote. And so on. Let's hear more from people outside the elite bubble.
  • The Feeds for Journalists OPML file is now available. You can use this file to subscribe to all the feeds in any feed reader. It will be updated periodically, so check back. Even better if your reader allows you to subscribe to OPML files, a drum I've been beating for a long time. Then you'll get the updates automatically.
  • I read this Politico piece about this history of Trump and Haiti.

    Initially there was some bad data about a possible connection betw Haitians and AIDS that soon turned out to be false. But Trump kept bringing it up, and the stink on Haiti wouldn't go away.

    Trump is still putting the stink on Haiti.

    Reading this reminds me of the stink that people in tech put on RSS. There never was anything wrong with RSS, no data behind any of the things that were said, but people, some who even thought we were friends, said some very ugly Trump-like things about RSS. (Actually even worse.)

    That's the sad thing about Trump, not just that he is such a flawed awful human being who is our president, but that if you live long enough, you've met plenty of other people who take exactly those kinds of shortcuts just to hurt other people.

  • Piero Macchioni, an Italian journalist, on Feeds for Journalists.
  • New this.how doc on Black Lives Matter.
  • Just realized, the reason librarians must love the web, and linking, is that you can provide a complete bit of complex information without being overwhelming. It's the same reason I like coding in an outliner. There's no cost for being verbose, just tuck the verbosity under a headline and leave it collapsed. Until the day you wonder wtf is going on here. You can hide little crumb trails for later discovery. Links work the same way.
  • The thread continues.

    I believe I have found the least disruptive way to fix the file-read synchronization problem.

    Here's a gist containing a new local routine that reads an XML feed.

    Note that we save processing of new items for the end, and don't do any processing until the feed river is in the cache.

    Update: I have the changes implemented locally, testing.

    In putting together the Feeds for Journalists project, I had to figure out some new stuff about open source, because I had never seen the idea applied previously to a list of feeds. I haven't even seen it used for other written work -- docs, novels, or news -- but I'm sure it has been.

    I've been shipping open source stuff mostly under the ultra-liberal MIT license.

    I've also been using lots of open source stuff in my JavaScript work. It's why I switched my development to JavaScript a few years ago. When I need to use a relatively new technology, there always is a package that supports it. Debugged, maintained, and complete. It's like developer heaven. Not only is it all there, but it's not locked up inside a huge Silicon Valley company. But things I depend on still get deprecated. I try to find projects that don't do that so much.

    So when I publish something via open source, what does that mean?

    • I work alone. The projects I publish are my code. I am responsible for every aspect of it. I try not to hack stuff in. And people who don't work on the code regularly can only hack stuff in (unless their brains are empty or they're some kind of prodigy, I've heard they exist, but have never met one). So I don't accept pull requests. I prefer clearly written feature requests.
    • I know my code has quirks. I use an outliner to write it, for one thing. You're seeing the generated code. That's another reason why pull requests don't work. And because I use an outliner, I edit structures of code, and nesting doesn't have any impact on readability or maintainability. But everyone's code is quirky. Reading other people's code is like opening their refrigerator. ;-)
    • Almost all my packages are named dave-something. That's because the straightforward names were already taken. I'm a relative latecomer to the package world in JavaScript. So there's daveutils, davefilesystem, davehttp, daverss, daveopml, davetwitter, davereader. There are exceptions like oldschoolblog. Just because I fell in love with the name and it was available. I've been doing modules like this since UCSD Pascal days. Back then I called them "czars" so there was screenczar and keyboardczar etc. We were dealing with lower level concepts back then.
    • When I find problems in other people's packages, and I do, I write up bug reports exactly as if I didn't have access to the source code. I try to stay within the three part framework -- 1. What I was doing. 2. What I expected to happen. 3. What actually happened. I have found it off-putting when the project owner asks me instead to submit a pull request. I don't have the bandwidth to learn how your codebase works internally.

    For the Feeds for Journalists project, I own the list. You are encouraged to make feature requests, in the form of URLs of feeds you think should be on the list, or to question the inclusion of any feed I've put there. I'm totally open to discussion (with the usual caveat as long as it's respectful).

    But first, before proposing an idea, think about what the project is trying to create -- a collection of feeds that's likely to cover breaking news from a number of angles with forays into science, the arts, education, humor. I included a feed about torrents (because it's good, and they have many of the same values as journalists and I think it would be useful for you all to get to know each other).

    1. Suggest feeds, and 2. Tell me why you suggested them. Ultimately I'm going to decide if it goes in this collection. And because there's a liberal open source license, if you see another direction to take it, for a subset of journalists perhaps, or librarians, or Italian journalists, you can fork it and use it as the basis for your own list.

    PS: I think this piece will become a this.how doc, like the one about standards, which also began as a blog post.

  • BTW, the River5 discussion continues with Carsten.

    He points out that the new method I proposed for adding items to rivers is not only more complex than the current method, and therefore more difficult to maintain (something I totally concur with), it still has a synchronization problem. Copying a pointer and deleting an object can't be an atomic operation. It's still possible something will be added to a queue betw the two steps. And that would result in a lost item.

    We're now somewhat in the weeds, possibly, but we all agree it's better to have an approach that loses zero items than one that maybe loses one item on (possibly) rare occasions. So I have proposed yet another approach in a comment. This one has the advantage of retaining the current simplicity and shedding a bell/whistle that didn't need to be there in the first place.

  • I'm guessing what Facebook saw in numbers is what I feel as a user. It's drying up.

    The most interesting part of Facebook is the On This Day feature, and even that is starting to scare me as we relive 2016 and 2017.

    It's very quiet on Facebook these days. And to the extent it's not quiet it's profoundly depressing.

    I don't feel it's too hyper to say Facebook is dying.

    Not sure there was anything they could have done to prevent it, but a dramatic U-turn away from news says, to me, they see it too now.

  • If you're a journalist and you love RSS, please join me in an easy project to improve both. Let's put together a list of starter feeds for journalists.

    I've kicked it off with a collection of news feeds that I know provide good value. If you have favorites, please suggest a few in a comment in this thread.

    In order for this to work it has to be done primarily by journalists. I'm happy to help any way I can.

    I started this project because I am sure that unless news thrives on the net we are totally screwed. I've never felt that we could trust Facebook to be the official distribution system for journalism on the net.

    This is the first step to creating many distribution nets, so a competitive market can develop. I've bootstrapped successful tech projects before. This is how it begins! It's not that hard, it just requires cooperation and a clear goal.

  • Can you imagine what would have happened if the Hawaii message had gone out in NYC or DC? The panic would have been unreal. People would have died. And the odds of a retaliatory strike would have been there too. This is how wars start, btw.
  • Yesterday Mathew Ingram, a longtime friend and professional journalist, put out a call for feeds for a reboot of his use of RSS.

    This got me thinking. What if a community created such a list of feeds, and did it over a period of weeks or months, with discussion, and a certain amount of deliberation.

    We could use the tools of open source to do this project.

    So, I've set up a new GitHub repository where we can work on that list of feeds. I'll write a small piece of software that periodically turns that collection into an OPML file suitable for use in a feed reader. From there who knows what happens, but just getting a list of feeds for journalists to follow, collaboratively, while it doesn't involve much work or technical know-how, would be a major improvement over the way we all do this individually, for ourselves.

    I'll post updates on this project to this blog.

  • Following up on yesterday's report on River5's file reading problem at startup, with further thought I realized I did not have a solution to the problem.

    The way I proposed doing it yesterday would have resulted in just as many lost items at startup. The problem was that the central routine was sending the JSON text of the file to each of the callbacks. Each would then parse the text, producing a structure which it would then link into the cache. Only one of the structures would survive in the cache, the last one linked in, and it would have one of the new items. The other new items would be lost. In other words, no improvement.

    So I changed the code and had the central routine parse the text, and call each of the callbacks with the resulting structure. Now all the callbacks add their items to the same struct, (unless I'm still missing something) and the result is zero lost items.
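
    In sketch form, the change is just where the parsing happens -- the routine name below is hypothetical, not the actual River5 code in the gist:

    ```javascript
    // Hypothetical sketch: the central routine parses the JSON text once,
    // then hands the same structure to every waiting callback. Before the
    // change, each callback parsed its own copy, and only the last copy
    // linked into the cache survived, losing the other callbacks' items.
    function callCallbacksWithSharedStruct(jsontext, callbacks) {
      const theStruct = JSON.parse(jsontext); // parse exactly once
      callbacks.forEach((cb) => cb(theStruct)); // all callbacks add to one struct
      return theStruct;
    }
    ```

    Because every callback now mutates the one shared structure, all the new items end up in the same place.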

    I've created a gist with the new code, and left the old gist in place. I have not yet released a version of River5 that uses this new approach. Testing it here first then thinking about how I want to deploy.

    Note this version is more complex because it has to initialize the struct once and only once, so the central routine, readRiverFile, must receive a callback that initializes the structure when the read fails, which it will do when the river file is first created.

    I haven't received any comments, but they are still welcome.

  • The other night Julia Ioffe said something wise on one of the shows: Almost everyone who immigrates comes from a shithole. Immigrating is no fun. It has to be worth it. People from Norway don't want to leave because it's not a shithole.
  • Maybe the thing to do is to start a group of journalists who love and understand RSS and want to use it in new ways to make their journalism better.
  • I wonder sometimes what goes thru people's minds when you offer to help and it's something you're expert in, and they ignore you.

    It's been happening with news people constantly since I started working on news software and formats on the web.

    I can't imagine what ulterior motive they think I have. I don't make any money from the work. I do it because I am sure that unless news thrives on the net we are totally screwed.

    Don't they see that too?

    I'm trying to think but nothing happens!

  • I've now had a chance to study the problem reported with River5 a few days ago.

    The first part of solving it was writing down concisely what the problem was. Carsten Senger did a great job, but he isn't responsible for the fix, I am. And I wrote the code and am familiar with how it's organized and how it got to be how it is.

    The problem statement

    • There are two kinds of rivers, ones associated with a list, and ones associated with a feed. The problem applies to both kinds of rivers, but is more likely to show up in the feed-based ones.
    • When a new item comes in, it is added to the rivers of all the lists it's in, and to the river for the feed it came from. Rivers are stored in files on disk, and we cache them in memory. When we want to add an item to a river, we first check if it's in memory. If it is, we add the item and we're done. If it's not in memory, we read it from disk, and then add the item to the river. This is where we run into trouble.
    • The trouble is that there might be two or more new items from one feed for one river. The first item gets added okay. But when we try to add the second item, since reading the file takes so long, we will find it's not in the cache, so we start a second read. We add our item, but the first item probably isn't in the copy we loaded. It would be an amazing coincidence if it was. So no matter what, we just lost one of the new items from the river. If there are N new items in the first read, we will lose N-1 of them.
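
    The race can be sketched in a few lines -- the names are hypothetical, with a setTimeout standing in for the slow disk read:

    ```javascript
    // Hypothetical sketch of the bug: two cache misses for the same river
    // each start their own read, each gets a fresh copy of the file, and
    // whichever copy lands in the cache last is missing the other's item.
    const cache = {};

    function fakeReadFile(f, callback) { // stands in for a slow disk read
      setTimeout(() => callback({ items: [] }), 10);
    }

    function addItemBuggy(f, item) {
      if (cache[f] !== undefined) {
        cache[f].items.push(item);
        return;
      }
      fakeReadFile(f, (river) => { // the second caller also lands here -- the race
        river.items.push(item);
        cache[f] = river; // last write wins, the earlier item is lost
      });
    }

    addItemBuggy("news.json", "first");
    addItemBuggy("news.json", "second"); // cache still empty: a second read starts

    setTimeout(() => {
      console.log(cache["news.json"].items.length); // prints 1 -- one item was lost
    }, 50);
    ```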

    The solution

    • The best solution is this -- create a queue for each river when the first read is initiated. Add its callback as the first (and, at that time, only) item in the queue. If a new read comes in while we're still reading the first one, add its callback to the queue. Once the file is read, call all the callbacks in the queue, concurrently, and delete the queue for that file.
    • I also considered doing it brute force, simply reading all the rivers at startup before doing any feed reads. But I wanted to write the code. And when I did I was glad, it's really interesting how well JavaScript handles this kind of gymnastics. I laughed out loud a few times while putting it together. Code that makes you laugh is worth writing imho. 💥
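
    Here's a rough sketch of the queue idea, with hypothetical names -- not the actual code in the gist:

    ```javascript
    // Hypothetical sketch: one queue of callbacks per river file. While a
    // read is in flight, later requests for the same file park their
    // callbacks in the queue instead of starting a second read.
    const fs = require("fs");

    const readQueues = {}; // file path -> callbacks waiting on that file

    function readRiverFile(f, callback) {
      if (readQueues[f] !== undefined) { // a read is already in flight
        readQueues[f].push(callback); // park the callback, don't read again
        return;
      }
      readQueues[f] = [callback]; // first caller creates the queue, starts the read
      fs.readFile(f, (err, data) => {
        const queue = readQueues[f];
        delete readQueues[f]; // delete the queue before calling back
        const jstruct = err ? undefined : JSON.parse(data.toString());
        queue.forEach((cb) => cb(err, jstruct)); // every caller shares one struct
      });
    }
    ```

    Because every queued callback receives the same parsed structure, all the new items land in one copy, and nothing is lost.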

    Code review

    • I put the queue code into a Gist for review. If you spot any problems, post a comment there. Thanks.
  • President Shithole goes for his physical.
  • It's really weird, the date on the piece is wrong. It was December 27, not December 15. You can see that in the copy, it talks about being between Christmas and New Year's. And mentions a piece that wasn't written until December 23. And I actually remember that this was on the 27th. Somehow, at some point, my CMS screwed this up. Which is weird because this is a static file. Looks like it was rebuilt some time in 2004. In any case if anyone wonders, the date is incorrect.
  • Ooops. We missed the 20th anniversary of RSS. The format that became RSS was rolled out in December 1997. Here's the piece where that was announced. I guess we missed the party. Open formats don't have PR firms. 💥
  • XML-RPC started in 1998, which means it's about to be 20 years old. I think this is the first post about it. Not very specific. We were already working on it, but we hadn't yet hooked up with the people at Microsoft. From a quick scan it looks like the actual protocol we standardized on didn't come out publicly until June. Pretty sure we had something working internally at UserLand in March or April.
  • An unusually long podcast about Occam, Wolff, war, medicine, programming, debugging, hacking, Russia, war again, Pearl Harbor, Hiroshima, Nagasaki, Watergate, Buzzfeed News, Ben Smith, the dossier, ladders, the elite, working together, and the day it hits us that this is not Watergate, will be another day like November 8, 2016. The gatekeepers, the elite, don't want to give up their positions on the ladder, so ideas that threaten that can't get through. Instead we have to be systematic about letting ideas in. There's a lot of tough love in here, but it's important.
  • I've been pushing the idea of Occam's News, where we talk about what's obvious not what we can prove. Michael Wolff's approach is exactly that. It's not what you can prove, but it's what we know anyway. Both this and proof-based news are valid and needed.

    Wars are fought with Occam's spy info, and guesswork about what the enemy is doing, and trying to figure out what's a decoy and what's real.

    Also medicine. Sometimes they don't know what disease you have and they just start treating the one they think you might have and see if it works.

    Programming, what I do, is most definitely not Occam-like, it's proof-based. But debugging is very much an Occam art.

  • I watched MTP Daily yesterday. For a few minutes, and then went back to work. It's an awful awful show. The worst of the worst.

    I hate the show because Chuck Todd only talks about the horse race. I swear, the day after the 2016 election he was already talking about how people were "positioned" for 2020. This kind of analysis never means anything. Go back and listen to the talk about the 2016 election in 2015 for a clue.

    And they don't even think about elections in a realistic way. Yesterday they were talking about how the Dems failed to sell competence last time (Hillary), so they probably shouldn't try that again. I wonder if they listen to themselves. There was a time, believe it or not, when both parties nominated people who were fairly competent. Even Ronald Reagan, who people thought was a joke, had served as governor of California before becoming president.

    Anyway, assuming competence is an attribute like hair color or gender, height or whatever, the next election is exactly the time to be selling competence. Why? Because the electorate flip flops. We always elect the opposite of what we elected last time.

    For example, we elected Trump to follow Obama.

    • Obama is black and Trump is a racist.
    • Trump throws tantrums and Obama's nickname is "No Drama."
    • Trump is a complete idiot, drooling at the mouth, and Obama has a law degree, is a professor, a total technocrat who probably aced every test he took. Trump probably bought his grades with money or blackmail (probably blackmail).

    Extrapolating, the next president will probably be a woman, obviously -- but bland and reliable, not too old, known for listening and studious, even pious, and not rich. And not a celebrity.

    Although I don't know much about her, I would take a good look at Amy Klobuchar from Minnesota. She's intelligent, passionate, confident, speaks well, has a sense of humor, is well-educated, young but not too young, thoughtful, and has the right values to start to undo the damage done by Repubs during the reign of Trump.

    Why not Kirsten Gillibrand? She has many of the same qualities as Klobuchar, but she's from New York. I come from NY too, but I don't think our president should. NY is our largest city, but it's actually a pretty small place. Trump stood out in NY, but we're seeing how that doesn't work globally. But even if it's great to have a president from NY, remember we flip-flop, and I'd say the odds of two consecutive presidents from NY is pretty slim.

    Anyway, as you can see, there are some interesting things to think about for 2020, even though it's so far away. Of course they discussed none of this on MTP Daily yesterday.

    PS: You want a courageous Democratic ticket? Klobuchar for president with Keith Ellison as VP. Unlike most Democrats these two can complete a sentence without sounding like an idiot. Both from Minnesota, btw, but look at how different they are. They say to white men who vote Trump, fuck you -- you had your chance, this is the way things look now. Get a pair, grow the fuck up and let's really start winning.

  • They announced something.

    What this all means, I have no freaking clue.

    Since the Algorithm is proprietary, I don't know what it did before that was so different. I gather they're reneging on their deal with professional journalism?

    I always thought friends had huge influence over what I see in the timeline.

    And won't Putin still be able to buy ads to fuel the virality of his mischief?

  • I was out walking in the morning rush hour in Manhattan, everyone looks so nice. I wondered, since #metoo has the world been treating attractive women better? Has cat-calling diminished? Leering looks? Inappropriate comments?
  • What if two networks, say Netflix and Amazon, did a deal. They would both do 1-hour-weekly dramas, one the reboot of The West Wing, and the other a Republican version. Find a prominent Democrat-leaning celebrity to be POTUS on the Democratic show, and a prominent Republican for the other. Offer the first job to Oprah, if she doesn't want to do it, how about Barack or Michelle Obama? Joe Biden. Hillary Clinton? Lawrence O'Donnell or Rosie O'Donnell or maybe someone from CNN like Brian Stelter. And then privately tell the president that he could have the second job, permanently, no impeachment -- president for life, on TV. Everyone can be entertained by all the crazy shit Trump tweets. He can nuke anyone he wants because it'll just be on TV. I think he might go for it. His "base" would go apeshit. Let Trump be Trump! (Note: He has to resign the real presidency before he can have the TV job.)
  • I got a Chrome deprecation message in the JavaScript console when I post HTML in some new software I'm working on. Encoding it fixed it.
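
    For the record, the fix amounts to escaping the markup before it's posted -- a minimal sketch, the helper name is mine:

    ```javascript
    // Hypothetical sketch: encode the characters HTML treats as markup
    // so the text can travel as plain data. Order matters: & goes first,
    // or the entities produced by the later replaces get double-encoded.
    function encodeHtml(s) {
      return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
    }

    console.log(encodeHtml("<b>hi</b>")); // prints &lt;b&gt;hi&lt;/b&gt;
    ```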

Welcome back my friends to the show that never ends.