9 comments

  • Jonathan Cutrell, over 5 years ago

    He's right... if everything stays the same.

    But nothing stays the same, and eventually JavaScript content will also be archivable, useful, etc.

    Also, what exactly does he think is "useful"?

    Does he think that paywalled or sign-in-required applications are useless and "shouldn't" be interacted with? I can't cURL your internal application. Well, I could; I'd just have a problem getting anything useful out of it.

    IMO, it's a silly argument that only matters until we overcome the technical hurdles. Everything is a continuum. Sure, maybe the stuff you build today won't be on archive.org. But don't make the mistake of thinking that everything will stay the same. And make sure you have a good understanding of what counts as useful, valuable, and worthwhile.

    6 points
    • ╯‵Д′)╯彡┻━┻ ., over 5 years ago

      Not everyone surfs the web with Lynx. Whilst I understand the need for websites to stand the test of time and be "curlable", there are still many 'appy' websites out there that fall back to plain HTML when we need it.

      I'm a big fan of archival services like Pinboard, which I have been running for 3-4 years now under an archival account ― it has terabytes of raw HTML data that I can peruse at any time and full-text search across any page. The bulk of those pages are very JS-dense pages that somehow, through some wizardry on the site's backend, have managed to preserve some text for me to read.

      GoogleBot struggled with this not so long ago, but it can now crawl fragment URIs with a hashbang in them as if the page were a normal HTML page. I suspect GoogleBot is a stripped-down Chrome that renders the page and does a scrape. In fact, GoogleBot has been proven to execute JS. SEO and search aside, there is (hopefully) some server black magic that detects browsers like Lynx and then serves us some 'neckbeard text' (a rough sketch of what that could look like follows the link below). Webapps which are not doing that are probably not worth your time anyway. Also noteworthy:

      http://christianheilmann.com/2011/12/06/that-javascript-not-available-case/
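
      For illustration, here is a rough, purely hypothetical sketch of what that server-side fallback could look like. It assumes Python and Flask (nothing in this thread says what anyone actually runs), and the "_escaped_fragment_" query parameter is the convention from Google's old AJAX-crawling scheme for hashbang URLs; the route and snapshot content are made up.

        # Hypothetical sketch: serve a pre-rendered HTML snapshot to crawlers that
        # use the old "?_escaped_fragment_=" convention and to text browsers like
        # Lynx, and the normal JS-heavy shell to everyone else.
        from flask import Flask, request

        app = Flask(__name__)

        # Pre-rendered snapshots, keyed by the state that would normally sit after
        # the hashbang (e.g. /#!/article/42 maps to "/article/42"). In a real app
        # these would be generated server-side or by a headless browser.
        SNAPSHOTS = {
            "/article/42": "<html><body><h1>Article 42</h1><p>Plain, curlable text.</p></body></html>",
        }

        TEXT_BROWSERS = ("Lynx", "w3m", "curl")

        @app.route("/")
        def index():
            fragment = request.args.get("_escaped_fragment_")
            user_agent = request.headers.get("User-Agent", "")

            if fragment is not None:
                # A crawler asked for the snapshot behind a #! URL:
                # /#!/article/42 arrives here as /?_escaped_fragment_=/article/42
                return SNAPSHOTS.get(fragment, "<html><body>Not found</body></html>")

            if any(name in user_agent for name in TEXT_BROWSERS):
                # Text browsers get the plain-HTML fallback ('neckbeard text').
                return SNAPSHOTS["/article/42"]

            # Everyone else gets the JS application shell.
            return "<html><body><div id=\"app\"></div><script src=\"/app.js\"></script></body></html>"

        if __name__ == "__main__":
            app.run()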

      Also, there are great proposals by the W3C to get webapps working without the need for JS. A lot of the behavior you see now in webapps could be done with simple HTML tags.
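
      As one small illustration (just a sketch, not tied to any particular proposal): a disclosure widget and basic form validation, both things people routinely reach for JS to do, using nothing but plain HTML. The form action and field names here are made up.

        <!-- A disclosure widget: the browser handles open/close natively, no JS toggle needed. -->
        <details>
          <summary>Show the answer</summary>
          <p>This expands and collapses without a single line of script.</p>
        </details>

        <!-- Basic form validation: the browser enforces the email format and the required field. -->
        <form action="/subscribe" method="post">
          <input type="email" name="email" required placeholder="you@example.com">
          <button type="submit">Subscribe</button>
        </form>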

      0 points
  • Derryl Carter, over 5 years ago

    Depends on whether or not you feel your content should be archived. For written content, sure, but for real-time web applications -- why would you want it to exist 50 years from now?

    4 points
  • Roma Kuraev, over 5 years ago

    The truth is, the universe will go on just fine without his bitching. /rant

    1 point
  • Florian Morel, over 5 years ago

    Well, Google has been crawling JavaScript for more than a year. And there's always server-side rendering.

    0 points
  • Andy Baudoin, over 5 years ago

    I get the point Tantek is making here, and it's true for content that should be archived. But I would argue that most sites built with JS frameworks are non-crawlable apps where much of the content and functionality sits behind a user login.

    0 points
  • Jeremy Stewart, over 5 years ago (edited over 5 years ago)

    I love this. Title is clever, style is inflammatory, points are valid.

    To be fair, I also love AngularJS. Flexible, powerful, neat as heck. Definitely an option for certain projects. Like anything else, consider your requirements first.

    0 points