• Kenneth Ormandy, 8 years ago

    We also have walrus stickers! Let me know if you want one and we can mail it to you.

    9 points
  • Elliott Regan, 8 years ago

    This has been the missing step in my development setup. You just saved me the hassle of creating my own CDN.

    My one concern is: how is this going to be sustainable? (You don't need to use specifics.) The thing I am afraid of is creating a site that uses Surge today, and in three years when your funding runs out, all of my sites suddenly stop working. I try to use Google's CDN whenever I can because I know it won't go down until the internet does, but it's hard to say for the little guys.

    Is there any failsafe plan for if you go under? Even something like "We will eventually build an easy export tool so you can upload all your files to your own CDN" would make me feel safer.

    3 points
  • Brock Whitten, 8 years ago

    Thanks everyone for the positive vibes!

    Kenneth, you better make good on the sticker promise :)

    3 points
  • Mathias Biilmann, 8 years ago

    Co-founder of Netlify and BitBalloon here.

    We've been doing CDN hosting for static sites for more than a year now, and have CLI tools, a public API, and a full web UI.

    Not sure how to put this without coming across as a bit aggressive, but:

    I did some tests to compare our products, and Surge sets these caching headers on EVERYTHING, including your HTML files:

    Cache-Control:public, max-age=31536000

    This header tells any browser or intermediary cache that it's OK to keep a copy of the current file in the cache and use that for up to 1 year without ever checking back with the origin server.

    What this means is that if you put a production site on Surge right now, there's NO WAY to make sure your users ever see any changes you make to your site within the next year. Even if you move to another host and completely redo your site, users will keep seeing the current version until it happens to get pushed out of their cache!
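    The freshness rule being described here is the standard one from the HTTP caching spec (RFC 7234). A minimal Python sketch of that calculation, purely for illustration (this is not Surge's or Netlify's actual code):

```python
import re

def is_fresh(cache_control, stored_at, now):
    """Per RFC 7234, a cached response may be reused without
    contacting the origin while its age is below max-age."""
    m = re.search(r"max-age=(\d+)", cache_control)
    if not m:
        return False  # no freshness lifetime: must revalidate
    return (now - stored_at) < int(m.group(1))

header = "public, max-age=31536000"  # the header Surge sends
# Six months after being stored, the copy is still "fresh", so a
# cache may serve it without ever asking the origin server:
print(is_fresh(header, stored_at=0, now=180 * 86400))  # True
```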

    Surge guys: make it a super high priority to fix this! You can really, really screw some people over. If someone within a large company accesses a site with these cache headers, chances are their corporate proxy cache will store all the files locally and never check back with your servers until they expire... I realize it's a free product, but this could cause real problems for people!

    Apart from that the current SSL support is vulnerable to several known attacks: https://www.ssllabs.com/ssltest/analyze.html?d=bite-sized-apple.surge.sh

    2 points
    • Brock Whitten, 8 years ago

      Thanks for your feedback, Mathias. I wish I had seen this comment earlier, as there is misleading information here. I'm happy to talk about it with you. Seems like an opportunity for us both to make our platforms better.

      FWIW, we have used the same max-age/ETag combo to serve over 4,500 applications over the past 8-ish months, with no reports of users experiencing stale caches. So it hasn't been an issue in practice. If we take a closer look at the HTTP/1.1 specification, we can see why.

      Caching in HTTP has two basic components: cache directives and cache validators. It's not uncommon to get these two confused, but the difference is significant.

      Cache-Control:public, max-age=31536000 is the cache directive. This tells the client how long the asset COULD be fresh for, and that it MAY store the file for that long. In this case the maximum age is 31536000 seconds (1 year). This does NOT mean the asset is fresh for 1 year; it still needs to be validated before being served.

      The HTTP specification describes the max-age cache directive as follows:

      The "max-age" response directive indicates that the response is to be considered stale after its age is greater than the specified number of seconds.

      ETag: "dee9f721409ef9b0e61188266eca494d" is the cache validator. This is the information the client needs in order to know whether the file it stored (based on the cache directive) is still fresh or stale. The client does this by passing the ETag back through the If-None-Match header when it makes the next request. If the If-None-Match value still matches the ETag, the server responds with a 304 and extends the max-age another year, allowing the client to serve the cached file. If it doesn't match, the server responds with a 200, the new file contents, and a new ETag for revalidating in the future. The result is that any given file will only be distributed once (even with HTML files). This is a more reliable way to do cache validation, though it requires you to have confidence in your ETags (in our case we do).
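      The conditional-request flow described above can be sketched as a toy origin server in Python. This is an illustrative sketch of the generic ETag/If-None-Match mechanism, not Surge's actual implementation (and the MD5-of-body ETag here is an assumption for the example):

```python
import hashlib

def serve(body, if_none_match=None):
    """Toy origin server: derive an ETag from the content, then
    answer a conditional request with 304 if the validator matches."""
    etag = '"%s"' % hashlib.md5(body).hexdigest()
    if if_none_match == etag:
        return 304, b"", etag  # cached copy is still valid: no body sent
    return 200, body, etag     # new contents plus a new validator

status, _, etag = serve(b"<html>v1</html>")        # first visit: 200
status, _, _ = serve(b"<html>v1</html>", etag)     # revalidation: 304
status, body, _ = serve(b"<html>v2</html>", etag)  # after a deploy: 200
```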

      In section 2.3 of the HTTP specification, ETag is described as a more reliable validator than a modification date:

      An entity-tag can be more reliable for validation than a modification date in situations where it is inconvenient to store modification dates, where the one-second resolution of HTTP date values is not sufficient, or where modification dates are not consistently maintained.

      Many CDNs use Expires or Last-Modified as their cache validator, which requires guesswork as to when the cache is invalid. So in reality, the opposite is true: if you want to be sure your users are never served stale assets, max-age/ETag is the more reliable approach. The Last-Modified approach could mean fewer network requests, but to compensate you can't cache HTML files. Personally, I don't like that tradeoff.
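      The one-second-resolution problem the spec quote alludes to is easy to demonstrate with Python's standard library HTTP date formatter. This is a standalone illustration, not tied to any particular CDN:

```python
import email.utils

# HTTP dates (used by Last-Modified / If-Modified-Since) only have
# one-second resolution, so two writes 0.4 s apart can produce the
# same validator, making the change invisible to revalidation.
t = 1_000_000_000.0
v1 = email.utils.formatdate(t, usegmt=True)
v2 = email.utils.formatdate(t + 0.4, usegmt=True)
print(v1 == v2)  # True: a content-based ETag would differ instead
```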

      HTTP caching headers are known to be ambiguous, and I don't claim to know them perfectly, but our approach is deliberate, follows the specification, and, based on our use in production, I would say it is sound. There should be no concern about being served stale assets ever, much less for a year.

      As for the SSL, we got that fixed. Thanks for pointing that out. Happy to send you some stickers to say thanks.

      2 points
  • David Simon, 8 years ago

    How about a pay-what-you-want type of pricing model?

    2 points
  • Pierre de Milly, 8 years ago

    If this is what it says it is, I completely love it!

    2 points
  • Sacha Greif, 8 years ago

    The plan is that everything Surge does today will remain free. Mainly, that is unlimited applications with custom domains.

    I'm not sure if that's a good plan…

    GitHub can afford to give away hosting for free, sure, but that doesn't make it a great business model.

    2 points
    • Leon Kennedy, 8 years ago

      I am wondering how this is going to be profitable too. GitHub hosts the projects anyway, so I can understand why they have free website hosting, but with Surge I don't see it.

      0 points
    • Brock Whitten, 8 years ago

      Sacha G. Thanks for sharing your thoughts. Do you have any suggestions for what would make for a compelling paid plan?


      0 points
    • Kenneth Ormandy, 8 years ago

      Thanks for commenting Sacha, I’ve been reading your newsletter for quite a while and definitely appreciate the feedback. Custom domains are just such a critical part of publishing real projects we didn’t want to limit people from using them. We’re still thinking ahead too. :)

      Anyway—if you’d like some stickers, I’d love to send some your way: https://surge.sh/stickers/

      0 points
  • Chase Giunta, 8 years ago

    This is great. Seriously. Thanks for this Kenneth, Brock.

    2 points
  • Jesper Klingenberg, 8 years ago

    I love that the "landing" page is simply a cleverly created Medium Post - BIG thumbs up!

    1 point
  • Ed Adams, 8 years ago

    Seems interesting! I'll try it out on my next project.

    1 point
  • Maria Smith, 8 years ago

    I'm also worried about whether this CDN will be sustainable. I usually use http://cdnsun.com/ and upload all my files to my own CDN; it makes me feel safe. But it's useful to diversify CDNs.

    0 points
  • Joshua Söhn, 8 years ago

    Very awesome! Tried it, but I don't get clean URLs though.

    0 points
  • David Darnes, 8 years ago

    Just tried it this morning, amazingly simple. Almost too good to be true, which is why myself and others are asking the 'cost' question.

    I think everyone is asking not because they don't want to use it unless it's free, but because it's good enough to be paid for. A service that allows me to bust up a live site in a couple of commands is worth the money, at a reasonable price of course :).

    0 points