How a banner ad for H&R Block appeared on Apple.com without Apple’s OK (arstechnica.com)
405 points by ben336 on April 8, 2013 | 169 comments


Facebook eventually addressed the issue by making the site accessible over HTTPS—though, as the authors of the 2008 paper note, HTTPS can be a "rigid and costly" solution.

This same excuse has existed for about as long as HTTPS, which dates to Netscape Navigator 1. Is it still that "rigid and costly"? Is there a technical reason that this is an unsolvable problem?

Considering the increase in computer and network speed over the last decade and a half, it seems strange that this would still be the case. Perhaps it's just that without pressure from competitors there is no pressure on the sites to solve it?


I don't know where the authors got "rigid and costly", but I think it's just BS... 2008 was a long time ago though, so maybe it was slightly more challenging then.

When Google went over to SSL for Gmail, they said "On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that. " http://techie-buzz.com/tech-news/google-switch-ssl-cost.html

That, coupled with the general availability of cheap and free CA-signed certificates, makes the claim pretty baseless.


I've gotta say, I haven't played with HTTPS sites all that much- does it mess up caching?


It used to - browsers wouldn't cache anything by default if it came over HTTPS (you could send a 'Cache-Control: public' header to get them to, though). As far as I can tell, these days almost all browsers have changed their policy, so you don't have to do that now.

It does completely disable your ISP's ability to run transparent proxies that cache content before it's sent to you, but the security and privacy gain is worth the trade-off.
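For anyone setting this up today, the header in question is just one line on the HTTPS response (the max-age value here is an arbitrary one-day example):

```
Cache-Control: public, max-age=86400
```

'public' explicitly marks the response as cacheable by shared caches even though it arrived over an authenticated connection.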


What about for your server-side stuff? Like caching in Varnish?


Varnish doesn't do HTTPS... so you have to put something that does SSL in front of varnish, which sits in front of your real web server:

SSL terminator -> Varnish -> backend
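A minimal sketch of that terminator layer, using nginx in front of Varnish on its default port (hostname, cert paths, and ports are placeholder assumptions):

```
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        # Hand the decrypted request to Varnish as plain HTTP;
        # the cache never sees any TLS.
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Anything that terminates TLS works here (stunnel, HAProxy, etc.); Varnish just needs plain HTTP on its listen port.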


You can do the SSL before Varnish, then it makes no difference.


Does that work? Each client has its own key, so your caching goes from "one cached copy for everyone" to "one cached copy per client", right? Or am I missing something there?


No, you could still have one cached copy for everyone. The SSL termination happens before the user's request gets to the caching server. As far as the cache is concerned, it is a regular http request. The only problem is you cannot have generic caches that live closer to the end user; the cache has to be controlled by the person controlling the SSL termination.


So you're talking about caching the cleartext and then encrypting it for each client, which is totally doable.

I (possibly mis-) understood the original comment to be claiming that you could cache the ciphertext, to which I just wanted to make sure I wasn't missing some huge piece of understanding.


Yes.

There's work underway at the IETF to define a standard for signing pages and included resources, so that caches can do their business while still providing the same authenticity features as an HSTS-compliant site. I can't seem to find a reference to an RFC right now, though, unfortunately.


Is there anybody other than StartCom offering free signed certificates?


Serious question: why do you want it to be free?

I paid $10 for an SSL cert. Isn't that low enough?


There are several billion people for whom that's significant, whether because of where they live, their age, or how easily they can get a credit card to make purchases online. $10 more than doubles the minimum cost of having a website on your own domain.


Thank you. I have said this many times. I have 10 dollars, but unlike the US or some other developed country, I can't just ship it overseas.


I guess we need a way to buy certificates with bitcoins?


Namecheap sells SSL certificates and accepts Bitcoin, at the very least.


It saddens me that every domain registration does not come with a free domain-validated SSL certificate.


I think this is what's going to drive the adoption of DNSSEC: free, DNS-validated SSL certs.

IPv6 should address the other problem - namely, that SSL certs are per-ip, not per-hostname, which makes hosting multiple sites a pain with IPv4. Or SNI could work, once Windows XP is truly abandoned.


Could you explain both those points in a bit more detail - I am a bit fuzzy this morning.


Historically, you can't serve multiple sites from one IP address (i.e. named virtual hosts) and use HTTPS. The reason for this is that the hostname of the site is included in the HTTP request from the client:

  GET / HTTP/1.1
  Host: mysite.com
By the time the server has decoded and read this header, you have presumably already started the secure connection, so the server has to have already selected which certificate to use for the session.

Workarounds are to have multiple IP addresses on your box with one cert per IP, or run the server on multiple ports with one cert per port. In both cases this enables the server to know which certificate to use from the underlying connection properties, and not wait for the encoded traffic to start arriving.

SNI (Server Name Indication) is an extension to TLS (Transport Layer Security) that essentially adds the hostname into the SSL negotiation, so the cert can be selected by the server in advance. It made it into OpenSSL implementations in the mid-2000s and is reasonably widely adopted.

Legacy libraries, Internet Explorer <=7, and Windows <=XP won't support it so it's not quite ready for mainstream use. Give it 5 years or so...
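To make the "select the cert in advance" part concrete, here's a minimal server-side sketch using Python's stdlib `ssl` module (the hostnames are hypothetical, and a real server would call `load_cert_chain()` on each context with that site's cert and key):

```python
import ssl

# One SSLContext per hostname we serve. With SNI, the server can pick
# the right one during the handshake instead of needing one IP per site.
contexts = {
    "siteone.example": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
    "sitetwo.example": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
}
default_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

def select_context(server_name):
    """Pick the certificate/context for the hostname the client asked for."""
    return contexts.get(server_name, default_context)

def sni_callback(ssl_socket, server_name, initial_context):
    # Called by OpenSSL mid-handshake, before any HTTP bytes exist --
    # which is exactly why plain HTTPS can't do name-based vhosts
    # without this extension.
    if server_name is not None:
        ssl_socket.context = select_context(server_name)

# Installed on the context used to wrap the listening socket.
default_context.sni_callback = sni_callback
```

Without SNI, the `server_name` argument is simply `None` and the default cert is all you can offer.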


The sibling poster does a good job explaining SNI. The shorter version is that without SNI all you know when establishing the secure connection is the IP address, so you can't do name based virtual hosts.

I'm not an expert on DNSSEC, but the idea is that there is a chain of trust going back to the domain registrar. If I receive a signed DNS response, and everything verifies, then I know that it comes from the person who registered the domain. I can't add a signed entry for example.com, so if I receive a signed DNS response for example.com, I know it ultimately originated (with possible caching, like normal DNS) from whomever registered example.com.

You can then add what is essentially a TXT record to the DNS entries for a domain that is the fingerprint of an SSL cert. If you receive that as a valid dnssec response, you know it can be trusted.

Essentially the dnssec infrastructure replaces the CA infrastructure.

You can do the same thing with ssh key fingerprints too.
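The mechanism described above was standardized as DANE/TLSA (RFC 6698) rather than a plain TXT record. A minimal sketch of computing the fingerprint that would go in such a record (the certificate bytes here are a stand-in; a real deployment would read the actual DER cert, e.g. via `ssl.PEM_cert_to_DER_cert()`):

```python
import hashlib

# Stand-in for a certificate's DER-encoded bytes.
der_cert = b"\x30\x82\x01\x0a placeholder certificate bytes"

# TLSA "matching type 1" stores the SHA-256 digest of the cert data.
fingerprint = hashlib.sha256(der_cert).hexdigest()

# The DNSSEC-signed record would then look roughly like:
#   _443._tcp.example.com. IN TLSA 3 0 1 <fingerprint>
# (usage 3 = trust this exact end-entity cert, bypassing the CA system)
print(fingerprint)
```

A validating client recomputes the digest of the cert presented in the TLS handshake and compares it to the signed DNS answer.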


aha - I shall have to go read up on dnssec.

edit: cdjk - thank you - I have to edit-reply as there appears to be an increased time delay on replies. Maybe it's me. Bought it.


I recommend DNSSEC Mastery, by Michael Lucas:

http://blather.michaelwlucas.com/archives/1640

It's not out yet, but a nearly-complete draft is available on Leanpub (with updates once he finishes copy editing):

https://leanpub.com/dnssecmastery

edit reply: I think the thread is nesting too far...


gandi.net provides a single-address SSL cert free-of-charge for a year with a domain registration or transfer. See: http://www.gandi.net/ssl :)


For the first year, and only the first year. You shouldn't be smiling about the recurring fee.


Actually, they give it to you for the year when you renew as well. So if you renew yearly, you get a year of SSL as well.


Also their basic SSL Cert is only about $16 USD. Very reasonable.


They're also a PITA to set up. A lot of amateur webmasters have only just figured out FTP and WordPress. The whole CSR/key-generation ritual is another big barrier.


It's gotten a lot easier - especially for AWS users and being able to have it dealt with by the load balancer.


> Serious question: why do you want it to be free?

If I wasn't at least a little bit clever with my spending, I would have a lot more difficulty paying my rent.

Why would I want to pay if there are alternatives offering the same product at no cost?


> Why would I want to pay if there are alternatives offering the same product at no cost?

Because free isn't necessarily a sustainable model given the current CA environment. Any server can generate its own certificate, but this does little to verify the identity of the server you are connecting to.

My point is that CAs provide a service that can't reliably be accomplished for free (yet - the CA model has many of its own issues). If you can find one for free, I would be led to ask "what are their motives for providing this service to me?"


It would be nice to have the option of encryption without identity validation. Most people interact with most sites without it today -- over HTTP. The only reason we can't encrypt all those connections is the big scary error message browsers throw up when you do so without paying a CA for their signature.


It has nothing to do with paying money; it has to do with reputation. The fact that most of the companies (CAs) that are willing to put their reputation on the line for you will do a bit of checking to make sure you're who you say you are, and that this process incurs some overhead, is a byproduct.

Let's put it another way, would you really trust that you're talking to https://www.amazon.com if it's trivial to get a cert for www.amazon.com[1] that's signed by a CA that the browsers include and trust[2]? How is it any different if the browser doesn't tell you the current cert is of dubious reputation?

[1]: It is, I could generate one right now using openssl.

[2]: It's not, that's why the system works.


Web-of-trust schemes are effective. The usual way we find out about stuff like this happening is word-of-mouth. Infrastructure for the process would automate it - after all the only thing you need is a way to say "this is what the certificate's hash should be".


I'm not asserting that there's no other way to do it, just that getting rid of the popup that says the site is untrusted is not a solution, nor even a step in the right direction, IMHO.


That wouldn't solve the problem in the posted article at all, though. The ad-inserting proxy could then just un-encrypt and re-encrypt.


The point of certs is to stop an attacker just sitting in the middle and handing out their own cert and you not being able to tell the difference.


Their motivation is to sell high assurance certificates to people who have been enticed by the free plans.

It costs them nothing more than a few seconds of server time to produce a signed certificate for me.


Not that I could find, but Namecheap sells RapidSSL certificates for about $10 that offer unlimited reissues.


1024bit certificates are only valid until December this year. Then we have to use 2048bit, destroying the benefit we get from faster computers.

You need SSL accelerators or pretty fast hardware to handle more than a few thousand SSL handshakes per second on a single machine. This is one of the places cost comes from.


> You need SSL accelerators or pretty fast hardware to handle more than a few thousand SSL handshakes per second on a single machine.

That's not true - or rather it conflates two unrelated problems.

1) HTTPS/SSL (even with larger certificates) isn't computationally expensive:

In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.[1]

2) The HTTPS initial handshake is slow. However, this isn't because of the power of the machine or the size of the certificate; it's caused by round-trip overhead (see [2]). The solution is to reuse HTTPS connections as much as possible (which both your server and the browser should try to do for you anyway, though doing things like using the same hostname can help).

[1] http://www.imperialviolet.org/2010/06/25/overclocking-ssl.ht...

[2] http://pic.dhe.ibm.com/infocenter/tivihelp/v2r1/index.jsp?to...
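To put rough numbers on the round-trip point, here's a back-of-the-envelope model (the RTT value and handshake counts are illustrative assumptions; a classic full TLS handshake adds about two round trips on top of TCP's one):

```python
RTT_MS = 50  # illustrative round-trip time to the server, in ms

def setup_cost_ms(requests, reuse):
    """Connection-setup latency only; ignores actual transfer time."""
    tcp = 1  # TCP three-way handshake: one RTT before any data
    tls = 2  # classic full TLS handshake: roughly two more RTTs
    if reuse:
        # One handshake, then every request rides the same connection.
        return (tcp + tls) * RTT_MS
    # A fresh TCP + TLS handshake for every single request.
    return requests * (tcp + tls) * RTT_MS

print(setup_cost_ms(10, reuse=False))  # 1500 ms of pure handshake latency
print(setup_cost_ms(10, reuse=True))   # 150 ms
```

The gap grows linearly with request count, which is why connection reuse (and later SPDY-style multiplexing) matters far more than CPU cost for perceived HTTPS speed.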


Re-using HTTPS connections is only helpful when you have repeat requests from the same user. It's actually counterproductive for tracking beacons or ad impressions, where you rarely see the same user twice in rapid succession.

For web applications the overhead is much less significant. Both because you can re-use connections, and because few web applications can manage 20k+ requests per second on low end hardware.


> destroying the benefit we get from faster computers.

That's precisely the point, though.


2048-bit is not all that much harder to encrypt than 1024-bit; you might go from 1% CPU to 2% CPU, but that's about it.


HTTPS is not rigid and costly for a single connection. It's rigid and costly because it prevents simple options for caching, like sharing Google-hosted jQuery. You can still use a CDN, but then you have more complexity in managing your certificates and pay about a third more (CloudFront pricing).

HTTPS is manageable where needed for security. Sleazy ISPs are making it necessary even where security is not a concern -- for example, viewing Apple's homepage.


That's not true. Google seamlessly supports both HTTP and HTTPS, and many other sites do too through simply using // as the protocol.


Using 3rd-party CDNs, like Google Hosted Libraries, is often slower anyway.

http://www.stevesouders.com/blog/2013/03/18/http-archive-jqu...


They are not making it necessary, only convenient. You're not harmed by seeing a few ads.


This point of view assumes that the injected code is harmless and never causes compatibility issues when viewing a website. The moment a JavaScript conflict occurs and Apple loses a sale or the ability to buy a product online is hindered, I think that there will be parties who certainly feel they are being "harmed" by this process of manipulating the content being delivered to end users.


Yes, this technique could potentially be misused. You can say the same thing about any technology.


This thread illustrates exactly the problem with HTTPS.

The proponents of HTTPS everywhere try to sweep the significant downsides under the rug, whilst others are spreading all kinds of unfounded FUD about HTTPS.

The bottom line is that "your mileage may vary". For some applications, SSL is trivial, and there is no excuse not to do it. For other scenarios it's a nightmare with all kinds of undocumented complications.

I'm currently working on providing SSL for a SaaS service with various client domains on AWS (i.e., with a limited number of IP-addresses). Doable, but far from trivial or inexpensive.


An annoying thing about HTTPS is that it requires you to serve each domain name from a separate IP address, and that can be somewhat costly.


No it doesn't. You can use multi-domain certificates. Works like a charm.

Also, I talked to GlobalSign recently, and they had a brand new solution that used SNI with an automatic fallback to multi-domain certificate for browsers that don't support SNI.


multi-domain certs are brittle, because you have to keep them up to date with all the domains you're serving.


That doesn't make them brittle, it does make updating them a potential single point of failure. But that goes for multiple things if you're working from one single IP address, so accounting for those is part of the tradeoff.


If you ignore IE+Windows XP, you can safely use SNI.


It's any browser on XP which uses the Windows Crypto API, most notably probably the second most used browser, Chrome. As long as XP is around, we're going to need one IP address per domain if we want to do SSL.



... and the stock Android browser, IIRC.



True, but just like XP, there are lots of pre-3.x devices out there...


That would be approximately 50% of them, btw.


I would love to...Who's with me? Anyone? Anyone? Bueller?


Alas, while I feel comfortable not XB testing WinXP because it's less than 1% of users (or whatever threshold you want), I'm not quite as comfortable using that argument with SSL certs. Unfortunately the two solutions (SNI and IPv6) are unlikely to work on XP, so the only hope is to wait until all those computers are replaced or upgraded.


This is why widespread IPv6 can't come soon enough. You can give whole subnets to each subdomain if you feel like it, without paying a cent...


SNI has been out for a long time. Windows XP with IE < 8 doesn't support it, but at this point Microsoft has practically abandoned XP, so it seems reasonable to let those users suffer and tell them to use Firefox/Chrome.


What does the message they see say? Presumably an invalid-cert warning, before you get the chance to say anything. It's OK if you just have part of the site on HTTPS, but difficult if all of it is?


>Windows XP with IE <= 8

FTFY.


I have no background in crypto stuff, but the main issues seem to be (a) SSL virtually requires buying certificates so small/hobby sites won't use it, (b) it involves multiple round-trips and latency is never going to go away unless someone invents faster-than-light communication, and (c) part of what makes encryption work is that it's computationally hard on today's hardware; when it's not, we move to a different algorithm that is, so the computational cost to using encryption versus not using encryption will always be there.


  > SSL virtually requires buying certificates so
  > small/hobby sites won't use it
startssl.com offers free SSL certificates valid in almost every browser, good for one year. I've been using them for my own site with no problems. Certificate cost is no excuse for continuing to use unsecured HTTP.

  > it involves multiple round-trips and latency is never
  > going to go away
Assuming reasonable protocol design (somewhat problematic for HTTP/1.0, better in /1.1 and SPDY), additional round trips are only a factor for the initial connection setup. Later requests can re-use the existing SSL session.

  > part of what makes encryption work is that it's
  > computationally hard on today's hardware
Encryption is based on being computationally expensive for a third party. The computational load on the communicating parties is negligible, particularly with modern CPUs. From http://www.imperialviolet.org/2010/06/25/overclocking-ssl.ht... , Google experiences CPU overheads of less than one percent:

  > On our production frontend machines, SSL/TLS accounts
  > for less than 1% of the CPU load, less than 10KB of
  > memory per connection and less than 2% of network overhead.


> startssl.com offers free SSL certificates valid in almost every browser, good for one year

startssl certs aren't trusted by my browser (or maybe the OS?), so SSL's identity authentication for startssl is void. It's still better than no cert, since ISPs can't detect what certs my browser trusts, and thus won't make stupid moves, probably.

If you can use more widely recognized certificates, please do.


http://en.wikipedia.org/wiki/Comparison_of_SSL_certificates_... claims StartSSL's free certs are valid in IE>7, Firefox>3, Safari, and Android>2.1. I can personally verify they're valid in Google Chrome.

Which browser/OS combo are you using?


I've debated purchasing a wildcard certificate from them but was afraid of having users whose browsers didn't trust the root CA. May I ask what combination of OS and browser you're using?


> startssl.com offers free SSL certificates valid in almost every browser, good for one year

True, but the last time I tried them the UX was a nightmare.


I've just started using them and yes, the UX still isn't great.

On the other hand, they are two orders of magnitude cheaper than Symantec (nee Verisign)...


> part of what makes encryption work is that it's computationally hard on today's hardware; when it's not, we move to a different algorithm that is, so the computational cost to using encryption versus not using encryption will always be there.

It's computationally hard to crack. To be useful, it necessarily has to be easier to use than to circumvent.


HTTPS is no longer "rigid and costly" in a technical sense. On a recent website I ran my own tests and saw a 2.5% increase in request time between HTTP and HTTPS. Basically, if you aren't a super huge site you aren't going to see any difference. I use HTTPS on all my sites; it's just simpler in the long term, and you avoid a lot of security problems.


It may not be considered costly when your advertisers pull out because their content is getting replaced via HTTP.


TLDR: ISP injected script tag ads.

I heard on the latest This Week In Security that Comcast was apparently not just injecting JS, but injecting bad JS. Meaning that closures weren't used, so name collisions could occur with the actual sites users were visiting.

For this reason, I can see HTTPS becoming standard, even for public, non-logged in users. I'm in the process of updating my site to be all-HTTPS and recently got confirmation (as much as one could ever expect) from Google there's no SEO penalty (http://goo.gl/sbtxq).


If your site has a login page anywhere on your domain, I think that really every page should be served over HTTPS, and it should use HSTS to make sure the browser knows it should always be accessed that way.

Even if your login page posts to HTTPS, if the login page itself is served over HTTP, a man in the middle somewhere between your server and the user can alter that page to post somewhere else. And even if your login page is HTTPS, they can rewrite the HTTP pages that link to it so the links point somewhere different.

Only with HTTPS for the entire site is it safe from those attacks. It is still vulnerable if there is a MITM the very first time the browser has ever visited the site, but it's a whole lot better than always being vulnerable!
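For reference, HSTS is just one response header, sent over HTTPS (the max-age here is an arbitrary one-year example):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Once a browser has seen it, it rewrites future http:// requests for the domain to https:// before they ever leave the machine, which closes the window described above for everything except that very first visit.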



To be clear, Comcast wasn't injecting ads, but bandwidth overage messages. Still terrible though.


I would like to see https everywhere, but the cost of certs is still an issue.


Thank you very much for TLDR. It is horrifying how many life-hours are spent for such bullshit stuffed articles...


Thinking about ad injection, it is actually quite scary what an ISP can do. Not only is it easy to display ads (or possibly even malware), but even worse, my ISP's certificate is installed as a default CA by Firefox. So they can even inject into SSL connections, with the only "warning" being that the certificate was signed by the ISP...


Who is your ISP, and what CA cert do they have installed?

The thought of an ISP having CA certs that are a part of default installs is unnerving.


Telekom ( actually T-Online, the German ISP branch). The certificate is identified as T-Systems (and I just found another one, Deutsche Telekom AG). Additionally looking through the certificates I found at least Swisscom who appear to have both a CA and an ISP, and AOL. But this is certainly not an exhaustive list but just the ones that caught my eye scrolling through the list of CAs.

[EDIT]And for the added 'told you so,' the German parliament uses precisely this certificate https://www.bundestag.de/


Here's where it was introduced: https://bugzilla.mozilla.org/show_bug.cgi?id=378882

Interesting comment on that thread:

> This CA was singled out as a CA that signed an excessive number of intermediate authorities (252) which together only have issued 4164 certificates in EFFs talk at C3. This is, by far, the highest number, the next contender is GTE Cybertrust with 93.


The bugzilla thread is an interesting read. And to be fair, the issue with the thousands of certificates is explained in it. It appears that the certificate is used to sign DFN, which in turn signs certificates for most German universities.

Btw, Video of the 27C3 talk in question: https://www.youtube.com/watch?v=DRjNV4YMvHI


Wow, reading through that bug makes me glad I never have to deal with Mozilla for anything time sensitive.


Well, you can at least remove those from the trusted cert list manually, but it seems like insanely bad juju for ISPs to have their CAs installed with the browser. I wonder if that's worth opening a bug on the Firefox tracker.


Well, considering Mozilla is who put them in there ...


I always go through and delete government and ISP certs from my computer's cert store.

On OS X, you can do this from Keychain Access.


If you don't need secure access to gov websites this is nice

In my case it was the opposite, a gov website mandated the installation of certificates for its use


That is why you always tell the installer that you do not have a PC or a Mac. Never let them touch your computers.


I'm not quite sure how you got the idea that he let anyone touch his computers out of his post: he said a default CA by Firefox.

There are a number of ISPs that are also CAs that are installed in many browsers by default.


Original (and better imo) article here: https://news.ycombinator.com/item?id=5486006


My problem with HTTPS is Google: they push it on every front, but - for various reasons - consider HTTPS and HTTP different pages, meaning you do not get link juice from any HTTP links if your site is HTTPS-only.

(1. don't listen to people telling you otherwise, it's an expensive experiment 2. redirects do not transfer all the juice, they count as links themselves, from my experience it's like not having external links to your site at all 3. If you do not depend on Google b/c you're SaaS, go for HTTPS only)


Where are you getting the data that you don't get Google juice for HTTP-to-HTTPS links?


From first hand experience. Then lately there was a discussion on /r/seo I currently can't find with a link to Matt Cutts who said redirects are handled the way links transfer juice.

[edit] People extrapolate that 301 are different because Google tells them to use 301 when moving pages.


I remember reading about someone with neighbours that were stealing their wifi. You can do much more interesting things with a proxy server than just inject ads: http://www.ex-parrot.com/pete/upside-down-ternet.html


Is Ars just ripping sites now? Although now changed, https://news.ycombinator.com/item?id=5505890 pointed to an Ars article from yesterday (that reached the front page) that was basically a copy-paste of an SE question


Ars has been featuring Stack Exchange questions, with their permission, for quite some time. I think it's some sort of partnership. http://arstechnica.com/author/stack-exchange/


It's one of the worst features that they run. I've gotten good enough at guessing which is "Ask Stack" from the headline that I don't click them anymore, though.


The writer reached out over twitter. I responded with my info on the subject and another individual pointed him to Henkel's blog.


This is probably the best counter-argument to the strongest objection that gets leveled at the people promoting HTTPS everywhere. People like to say that HTTPS everywhere would break transparent caching by ISPs. After all, HTTP is designed to allow caching proxies to exist inline and still supports dynamic content gracefully (er, somewhat, anyway).

But in fact the same features that make transparent caching easy make this kind of shenanigans easy. There are tons of companies in this space now. Not just people like NebuAd and R66T, but lots of "subscriber messaging systems" like FrontPorch (which I've heard sells messaging data for behavioral advertising) and PerfTech (which has assured me that they do no such thing).

This should be an easy way to push back one of the last "real" arguments against using HTTPS everywhere. There's no excuse not to be running your site on HTTPS all the time - it protects you and your users from all sorts of mischief for a minimal overhead.


It's getting to the stage now where I think domains should be sold with an SSL certificate as standard (minimal vetting, no warranty) - just enough to provide encryption, rather than treating it as an optional extra.


One could argue that DNSSEC is a variant of this - put your SSL certificate in a TXT record in your DNSSEC-signed domain and you no longer need a certificate authority system to sign the certs. Now you can self-sign the cert and get it for free!


If the best argument in favor of HTTPS everywhere is that it will prevent your ISP from showing you ads, the movement is doomed.


The point is not to stop your ISP from showing you ads. The point is to stop your ISP from interfering with your traffic in transit.

If I ran a website with ads, and someone was stripping those ads to replace with their own ads, I'd be annoyed. I'd be amazed if that's something Google would tolerate. We've seen plenty of stories from people saying "Google closed my ad account and froze all my money!!!" so I hope they do that to this ISP and or the company serving the ads.


But you already don't stop your ISP from interfering with your traffic in transit. So why is it a big deal if you continue to not do it? Or is seeing a few ads more important to you than, say, all of your email?


People don't stop their ISPs from tampering because they have a reasonable expectation that the ISP won't tamper.

But now that they've seen their ISP tampering, those people might switch on encryption for their email, and everything else.


That's an incredibly naive expectation. ISPs have been replacing error pages with their own search pages serving ads since the 90s.


Ah, yes, you're right. Sorry.

For what it's worth I was grumpy in those situations too. I wrote polite letters. Where possible I opted out.

But your point - this kind of thing happens all the time, has been going on for years, and no one is doing anything to stop it even though it's wrong - is taken.


Holy cow. It seems this has had a direct effect: they're no longer injecting javascript into webpages. I just tried Amazon, eBay, and a few others where the script injection used to be present, and it's no longer there.

I absolutely can not understate just how happy I am about this.


you mean overstate? :)


Absolutely; thanks for the correction. It seems I was just a tad too excited to post, haha.


HTTPS will not prevent this: the ISP can issue their own CA to their users and then decrypt/encrypt https as it passes through them. (Many corporations already do this). What will prevent this is legislation and/or competition.

It amazes me that US Internet access has very little of either. All the drawbacks of monopoly Internet, with all the drawbacks of unregulated Internet.


I believe ISPs can not have a transparent HTTPS proxy without the "invalid certificate" browser warning. ISP users would have to manually trust their ISP's CA.


Or have it added to their chain beforehand. An ISP could trivially include this as part of their "welcome pack" installer CD or the like.

Would that silently affect people like us? No. Could they do it to all their non-technical customers? Absolutely.


Not going to help if their non-technical customers are using iPads or similar, though - which non-technical users tend to like.


iOS gives you a friendly and official-looking "accept this certificate?" dialog when you connect to a router that offers one. Non-technical users will accept and proceed without blinking.


Slight correction: an ISP or corporation can run an HTTPS proxy. In effect, your 'HTTPS' connection is to the ISP/corporate proxy only.


I'm confused as to how what you're saying or what parent is saying would work. The trusted CAs live on your computer, and should not be susceptible to tampering by your ISP. And how can your ISP or corporate network set up an HTTPS proxy like you're suggesting without triggering a warning to the user that they are not connecting to the SSL certificate's specified domain?

Is there something about SSL/TLS that I'm fundamentally misunderstanding?


Nope, you're right. The ISP would have to install the CA certificate on every device on the network, which is a nontrivial task.

Furthermore, if the ISP has done that, they don't need you to go through a proxy. Your connection is already going directly through them.

Edit: However (as you can see by some of the responses in this thread), there's certainly the possibility that your ISP itself is an actual certificate authority recognized by browsers. That scenario is indeed quite worrying.
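The trust-store mechanics being discussed here can be sketched with Python's standard `ssl` module; the filename `isp-root.pem` is hypothetical:

```python
import ssl

# A default client context verifies server certificates against the
# platform's trusted CA store and requires a valid chain.
ctx = ssl.create_default_context()
print(ctx.cert_store_stats())  # counts of trusted certs, e.g. the 'x509_ca' roots

# "Installing the ISP's certificate" amounts to adding one more trusted
# root, after which certs the ISP forges on the fly would verify cleanly:
# ctx.load_verify_locations(cafile="isp-root.pem")
```

Until something like that last line runs on a device, the ISP's forged certificates fail validation and the browser warns, which is why the interception isn't fully transparent.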


It's also a pretty trivial task if you control the user's routers and can give them installation disks to make "the internet work". As corporate and university wifi shows, people will willingly accept new certificates required to join hotspots; they'll also do it on their desktops without blinking.


Indeed, although I don't agree "internet-enabling software" is trivial in terms of engineering and support costs, considering the range of devices today. But mostly I just wanted to clarify on the point that interception is not fully transparent: that the ISP does need to compromise every device that connects to the network.

But I do agree with your original point that to the extent possible, there should be legislation (if there isn't already) against intercepting TLS-encrypted connections of ISP customers, in cases where the ISP is also a browser-approved CA or is actually willing to distribute its own CA cert.


It should be made illegal if it isn't already. Since "any site" did not consent, doesn't this injection really "change" their site?


Maybe they should lose their "common carrier" status if they mess with the content? Then they would be responsible for whatever is said on the internet, which would probably dissuade them from doing it.


Most ISPs in the US don't operate under common carrier regulations. Even the ILECs (essentially the only common carrier data networks) sell their internet services via subsidiaries to avoid it.


Oh, interesting. What protects them from libel suits and the like, then?


Libel for what? Cases like this, where they make Apple look bad? I think that would be a stretch. Typically ISP terms of use say they can do pretty much whatever they want (see the Computer Fraud and Abuse Act/the whole Aaron Swartz thing). In theory, if you don't like it you can take your business elsewhere.

This "take your business elsewhere" competitive environment is supposed to foster innovation blah blah. Many would say this is BS and more regulation is needed because of a duopoly. As a competitive ISP I have to disagree, but it is true that the duopoly providers spend more on advertising so most people aren't aware of alternatives.

One could argue that rewriting pages to insert JS makes a derivative work or something and that gives Apple grounds to sue because of copyright, but that's tough as ISPs are supposed to be finding ways to inflict the Emergency Broadcast System on users and JS insertion is generally less obtrusive than hijacking all HTTP/HTTPS until the alert clears.

My perception is that most ISPs avoid this kind of thing because we don't want to give the FCC any more excuses to mandate things like "Net Neutrality" with poorly understood policy consequences.

On the other hand, inflicting one of those DNS-hijacking "special offers" systems can increase revenue from the typical residential user (who wouldn't care) by a few percent, so there's always a bit of business pressure.


I believe it counts as copyright infringement. If you modify content in transit, you're creating a derivative work.

Not sure if this has ever been tested legally, though.


In such a case, wouldn't any form of proxying, even without modification, be copyright infringement?

There are also other applications of modifying pages in transit: for example, mobile connections often proxy a lower-quality version of images (done transparently by the ISP) to save bandwidth. Also, some proxies may block videos, ads, etc., which is as much a modification as adding them.


I have always wondered how long it would take for this sort of behaviour to kick off.

This is to the internet what global warming is to the earth... well, that might be going too far, but this is high-tech pollution at its worst.


zmhenkel's Reddit comments if anyone wants first person accounts:

http://www.reddit.com/user/zmhenkel


Wouldn't you know it, the CMA Communications (the ISP mentioned in the post) website is not accessible via HTTPS. "View My Bill" and similar link you off to a third party domain.


Yet another reason blindly running JavaScript from unknown parties is a bad idea. Whitelisting, with progressive enhancement for the scripts I actually want, should always have been the default.


If the ISP is injecting Javascript, they can just inject it as coming from the same domain.

I don't like "force HTTPS everywhere" but these jerks are forcing it. It sucks, but it sucks less than this.
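For what it's worth, the script-whitelisting idea discussed above exists today as the Content-Security-Policy response header; a minimal sketch (the CDN hostname is hypothetical):

```
Content-Security-Policy: script-src 'self' https://cdn.example.com
```

Note this only really helps over HTTPS: an ISP rewriting plain HTTP responses can strip the header along with everything else, and anything it injects inline into the page body comes "from the same domain" anyway.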


@begurken you've been hellbanned since this post https://news.ycombinator.com/item?id=5466310, no idea why

https://en.wikipedia.org/wiki/Hellbanning


> The idea has a great number of pros, and almost no cons.

1. Certificate problems with embedded devices.

2. Much harder to control what's going on with your network.


We had a similar case publicly exposed here in the UK a little while ago, with Phorm and British Telecom:

http://www.bbc.co.uk/news/technology-13015194

There is a good chance that such practices may be found to be illegal in a UK court (Regulation of Investigatory Powers Act primarily with some discussion about applying the Data Protection Act or Computer Misuse Act) but they haven't been tested yet. Both companies very quickly stepped back from the 'trial' they were conducting when it became clear there might be public support for a test case.


An interesting article, but overly verbose. It could have said the same thing in half the space.

However, a big FU to Arstechnica for prostituting the name of Apple to get more visits to the article. The headline did not need to imply that Apple.com was hacked or that Apple was somehow unaware of what's happening at their site.

It's sleazy journalism and beneath the usual ethics of Ars Technica.


Does anyone know the CFAA implications here? My system is a "protected" system; this would clearly be unauthorized access.


As exciting as it is to see this posted on a high-volume website, I honestly doubt CMA is going to change their practices on this issue.

If anything, the Acceptable Use Policy change on the 4th was a sign that they'd be reluctant to change their stance on this issue at all. They honestly don't care.


They might change their tune if they get a letter from a lawyer or two. Can't imagine Apple, for example, likes the idea of third-party ads overlaid on their site.


I could see Google bringing a hammer down on them if it's true that they are overwriting ad space on certain websites.


I'm terrified to let them tamper with it, but Congress really needs to make laws regulating ISP behavior in the USA. ISPs will never do it on their own.

The problem with such a bill is that it will have a dozen riders for very horrible things.


The "root" of the problem -- on what marketplace are these display ads being sold?


Any of the hundreds of different advertising services that feed into the exchanges that are carried by the major networks. There's likely no direct connection between who sold the ad and who made the deal with this ISP to let them serve that network. H&R Block probably had no idea their ad ever appeared on Apple's site.

http://blog.inuvi.com/wp-content/uploads/2011/01/LUMA-Landsc...


Yeah, but somewhere H&R Block purchased the inventory. You don't just generically purchase a 300x280 banner ad without regard to where or how it runs. It has to be connected with some sort of property or impression.

It could be a remarketing ad or a demographically targeted ad, but in that case the buyer still purchased it somewhere, and some company or companies are responsible.


Ex: the ISP signs a contract with "Google Ads for Publishers" to carry their display ads. H&R Block buys a retargeting campaign through AdRoll. AdRoll runs this campaign by bidding on the matching cookies through AppNexus. AppNexus feeds into DoubleClick, which serves Google's ads. H&R Block shows up on the Apple website. H&R Block is three companies separated from their buy and from where the ad is shown, and never signified any intent to advertise with an ISP or with Apple. Tracking the ad back to where it was sold gives you AdRoll, which wasn't complicit in the scheme either. Google is the "root" of the problem in this fictional example.


Did anyone else have trouble hitting the back button after that article? I find that equally skeezy.

Edit: It seems they've mapped command-back-arrow to a non-default action. Not cool.


Anyone else find it interesting that R66T is vaguely read as "ROOT" -- this is some kind of cruel joke, right?


Very interesting that r66t.com doesn't appear in the two most popular Adblock Plus block lists!


So an internet provider inserted ads into web pages and two bloggers blogged about it.

I hate this ultra-low signal-to-noise style of writing anyway, but using it for a tech piece is more than ridiculous. This isn't a 1970s western movie, nor does it belong in the NYT arts & culture section.


Did he complain to the FTC?


This is quite common in China. Even the largest ISPs do it.


Why wouldn't all ISPs do this, given the economic incentive? Is advertising going in this direction?


One thing is that it can ruin the UX of the page. I was constantly hitting the overlays while using an iPad when all I really wanted to do was advance the page I was reading.


If my ISP did this I would switch instantly.


The problem here is that in our city, we have four ISPs. My parents live in a suburb close to the edge of the city. AT&T, Comcast, and the others don't really provide any services out there.

I say "not really" because AT&T provides phone service, but not DSL or U-Verse.


And when they all do it, or no other ISP will service your area?


`ssh -D 8080 micro-ec2` and proxy through localhost:8080 till it's sorted out.


Exactly! No need to switch ISPs after all. (Though personally I add the -N flag so that the tunnel is clearly separated from the remote shell.)
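Spelled out, the tunnel described above looks something like this; the host alias `micro-ec2` comes from the parent comment, and the `curl` line is just one way to point a client at the proxy:

```shell
# -D 8080: open a local SOCKS5 proxy; -N: forward only, no remote shell
ssh -N -D 8080 micro-ec2

# In another terminal, route traffic through the tunnel, e.g.:
curl --socks5-hostname localhost:8080 http://example.com/
# For a browser, set the SOCKS proxy to localhost:8080 in network settings.
```

Since the ISP only sees encrypted SSH traffic on the wire, there's nothing left for it to rewrite.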


Lots of websites block requests from EC2, though. A notable one is Stack Overflow.


Then it's back to Fidonet, eh?


It's a brave new world out there. Well, I knew this would happen eventually, and I got a lot out of the internet while it lasted. Shit, the last 15 years of my life have been grand thanks to the net, but now it's time to kiss it goodbye.

I, for one, welcome our new corporate master feudal lords.



