password generator progressive-web-app

Back in 2015 I wrote a JavaScript-based password generator and turned it into a drag-and-droppable bookmarklet – a tool I still use today.

However, over time I found myself signing up for more and more accounts from my mobile device, which doesn’t support bookmarklets in the same way a desktop browser does.

I also got fed up with having to sign into my password manager on my mobile every time I wanted to generate a new password.

So, after seeing Rowan Merewood’s talk on progressive web-apps at Brighton PHP, I decided to create a new password generator specifically for mobile devices.

Check it out here. Source code available here.

Check out my original post from 2015 here.

Here is the original bookmarklet, drag it into your bookmarks to use it: [PwGen].

See the PwGen app on securityheaders.io.

Screenshot from PwGen on a mobile device
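
For anyone curious how a generator like this works under the hood, here is a minimal sketch of the general approach in JavaScript. This is not the actual PwGen source – just an illustration of generating a random password with the Web Crypto API, plus the one-line service-worker registration that makes a page installable and offline-capable as a progressive web-app (the sw.js filename is a placeholder):

    // Minimal sketch of the general idea – not the actual PwGen source.
    // Generate a random password in the browser using the Web Crypto API.
    function generatePassword(length = 24) {
      const charset =
        'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*';
      const randomValues = new Uint32Array(length);
      crypto.getRandomValues(randomValues); // cryptographically strong randomness
      let password = '';
      for (let i = 0; i < length; i++) {
        // modulo introduces a tiny bias; fine for an illustration
        password += charset[randomValues[i] % charset.length];
      }
      return password;
    }

    // Registering a service worker is what lets the page keep working offline
    // once it has been visited – the core of the "progressive web-app" part.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js'); // 'sw.js' is a placeholder filename
    }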

phones: don’t be complacent with your trust

We trust our phones to connect us to the world, and they allow us to prove our identity.
They are an authentication factor in themselves – proving we have the phone is one measure of proving we own an account (via email, SMS, Google Authenticator, etc.).
Proving we can unlock the phone is another measure (mostly just to show we didn’t recently steal it).

So why do so many of us who also use our phones for work agree to trust our employers with complete control over them?

I recently wanted to access my work email on my phone, so I installed the email app, which told me an administrator would have to allow the connection.
So I contacted my friendly local sysadmin, and they said:

You need to install the policy tool on your phone before we can allow you to access work email on your phone.

Fine. So let’s look at the permissions this “policy” tool is requesting:


Notably, the app wants these permissions:

  • Erase all data on the phone by performing a factory reset.
  • Change the screen lock.
  • Lock the screen.
  • Enforce storage encryption.
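
For reference, on Android these correspond to the policies a device-admin app declares in its policy file. The snippet below is a generic illustration of what such a declaration looks like – not the actual manifest of the tool I was asked to install:

    <!-- res/xml/device_admin.xml – illustrative only -->
    <device-admin xmlns:android="http://schemas.android.com/apk/res/android">
      <uses-policies>
        <wipe-data />          <!-- erase all data / factory reset -->
        <reset-password />     <!-- change the screen lock -->
        <force-lock />         <!-- lock the screen -->
        <encrypted-storage />  <!-- enforce storage encryption -->
      </uses-policies>
    </device-admin>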

I understand these requirements from a security perspective; they are all administrative functions I would want control over if I owned the phone – which I do. The problem is that the company I work for does not own my phone, and I will never trust them to the same degree that I trust myself.

When I spoke to my colleagues about this, they all said the same thing to me:

But the company will never actually _use_ any of these permissions – they just want to be able to delete your email from your phone if it’s lost or stolen.

This completely misses the point.

I refused to place my trust in this app for two reasons:

  1. Apps should not be granted more permissions than they need to fulfil their purpose.
    If I agree to these permissions, the app has the power to use them, regardless of any good intentions.
  2. Whether I trust the SysAdmins of my company is irrelevant.
    If a SysAdmin in my company can control my phone, then I’m also entrusting that control to a string of black-box processes and procedures I have no oversight of.
    I take my phone’s security very seriously – *nobody else* has the same motive to protect my phone that I do.

If my company gets hacked – or if any one of the SysAdmins’ accounts gets hacked (and there are probably multiple SysAdmins with the same level of access to my phone) – then a malicious actor has the ability to lock me out of my phone, or wipe it without warning.

This may have the following side-effects:

  • Loss of personal files/photos stored on the device (assuming they aren’t all backed up to the cloud somewhere)
  • Loss of 2-factor login codes (because you don’t have a U2F device)
  • Loss of 2-factor backup codes (unless you keep them stored somewhere safe, and not in a text file on your phone)
  • Loss of other account passwords kept in an encrypted text file on the encrypted SD card in your phone (which isn’t in the cloud, for “security”…)
  • Inability to contact anyone (because you don’t actually remember anyone’s phone number anymore)
  • Inability to buy things (because you rely on Apple|Android Pay and no longer carry cash or cards)
  • Inability to use public transport (because you use an App for that)
  • Inability to control your house heating/lighting/door-locks (because you can’t get enough of those IoT devices)

But these issues aren’t limited to you. Everyone in the company who wants access to their email on their phone has to agree to and accept this policy.

So if I’m a hacker and I’ve compromised just one SysAdmin account – I have the ability to wipe the phone of everyone in the company who has placed their trust in this app.

Does this include the CEO or board of directors?
Does this include all of the security staff?

Desired Outcomes

A malicious actor might choose to disable these devices for the following reasons:

  • To destabilise a company during a critical period of business, causing financial harm.
  • As part of a campaign to cause as much damage to the company as possible.
  • To inhibit security personnel from countering the actions of the malicious actor.
  • To restrict the short-term management of company stocks and shares.

General Motives

  • Individual victimisation.
  • Retribution against the company.
  • Competitor motivated or financed.
  • Foreign government de-stabilisation.

Victim / Target

The intended victim of this kind of attack can vary a lot. As a user, I might be the sole intended target of a directed attack, but I could also just be one of billions of victims.

  • Single targeted user.
  • A group of users within a company who share a common function (e.g. security personnel).
  • The company itself (All users of the app in the company)
  • All users of the app (If the app or app company is targeted)
  • All users in a specific country (In a state-sponsored attack on a foreign government, as part of a de-stabilisation process)

Mitigation

  • Any app you require your users to install on their devices should only have the permissions required to serve its purpose.
  • The ability to perform actions on users’ personal devices must be restricted to those who absolutely need it.
  • Non-reversible processes, such as wiping a device, should require a “group action” by multiple SysAdmins (sketched below).
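
To illustrate that last point, here is a rough sketch in JavaScript – with entirely hypothetical function and field names – of how a management system could refuse to issue a destructive command, such as a device wipe, until a quorum of distinct administrators has approved it, and log who was involved:

    // Hypothetical sketch of a "group action" safeguard – all names are made up.
    const REQUIRED_APPROVALS = 2; // a wipe needs at least 2 distinct approving admins

    function canExecuteWipe(request) {
      // request.approvals is assumed to be a list of admin IDs who approved the wipe
      const otherApprovers = new Set(
        request.approvals.filter((id) => id !== request.requestedBy)
      );
      return otherApprovers.size >= REQUIRED_APPROVALS;
    }

    function executeWipe(request, auditLog) {
      if (!canExecuteWipe(request)) {
        throw new Error('Wipe refused: approval from multiple administrators required');
      }
      // Record who requested and who approved, so any misuse can be investigated later.
      auditLog.push({
        action: 'wipe-device',
        device: request.deviceId,
        requestedBy: request.requestedBy,
        approvedBy: [...new Set(request.approvals)],
        timestamp: new Date().toISOString(),
      });
      // ...only now would the actual wipe command be sent to the device...
    }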

Summary

As a company, we need to be less cavalier about what we ask our users to trust us with.

As employees, we need to be more protective of our own devices, our data, and our privacy.

As system designers, we need to build in multi-admin safeguards, to ensure that any action against a user’s device can only be carried out as the collective decision of an authorised group of administrators.

As system engineers, we need to ensure any actions that are carried out are recorded and audited, so that misuse of the system can be identified, reported, and investigated thoroughly.

office365 calendar link vulnerability

In November 2015 I noticed that Microsoft Office 365’s calendar sharing option used an HTTP permalink.

the calendar link shown in Office365 settings is an HTTP link, not HTTPS

I reported this to Microsoft, and they have since fixed the issue. The rest of this article was written before the issue had been fully resolved.

what is the issue?

Before December 2016, the “share calendar” link that Office 365 gave to users was a bog-standard HTTP GET permalink to your calendar which, depending on your settings, could be used by anyone who had the link to access:

  • all of your calendar details
  • some of your calendar details
  • just your availability
  • or nothing

If, like me, you use your calendar for work – to organise and attend video-conferences and online meetings, or to discuss anything sensitive – then at some point you will probably have URLs, usernames, passwords, conference IDs, and VOIP call numbers and passcodes within the details of your calendar appointments.

If your calendar sharing option is set to “Full Details”, then anyone with the generated URL gets full read-only access to your entire calendar – including all of those details.

The above is all expected behaviour, because you’re supposed to keep the URL secret – the problem is that the generated URL is not prefixed with HTTPS (at least, it didn’t use to be).

So all the standard MITM attack concerns apply – anyone able to see the raw, otherwise unprotected network traffic between your browser and the Microsoft Office 365 server will be able to see the content of this GET request.

Microsoft have even done 99% of the work required to fix this issue – when the link is accessed, the following happens:

shows the HTTP request being redirected to HTTPS

So the request is immediately upgraded to HTTPS using a 301 Moved Permanently response pointing at exactly the same URL.

But the problem is that the initial GET request has already been sent over plain HTTP, and it contains the secret permalink key (which I have redacted) that you need in order to access the data. Even though an attacker wouldn’t be able to see the second request being made, or the content being downloaded over HTTPS, they could simply follow the HTTPS redirect themselves to see the content, and then repeat that request whenever they want your latest calendar data.
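
To make that concrete, the exchange looks roughly like this. The hostname, path, and key below are illustrative placeholders, not the real Office 365 URL scheme:

    # first request – sent in the clear; the secret key in the path is visible to a MITM
    GET /calendar/<secret-permalink-key>/calendar.ics HTTP/1.1
    Host: outlook.example.com

    # response – upgrades to HTTPS, but the secret has already been exposed
    HTTP/1.1 301 Moved Permanently
    Location: https://outlook.example.com/calendar/<secret-permalink-key>/calendar.ics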

There is no tracking available through Office 365 to see what devices are using this link, how often, where from, or anything of that nature. It also doesn’t give you the option of resetting the link in case you think someone else might have gotten access to it.

At some point between November 2015 and April 2016, Microsoft changed the default links being generated so that they use HTTPS – but this still leaves any existing links vulnerable, and Microsoft haven’t informed users of the problem (to my knowledge).

what can i do?

If the link in your calendar sharing settings begins with HTTP://, then do this:

  1. Disable calendar sharing completely
  2. Save the changes
  3. Re-load the options window
  4. Re-enable calendar sharing

Your link will now be different, and the old link will now be dead.

so what did Microsoft do?

They changed all new links to be HTTPS instead of HTTP, but they didn’t change any existing links, or provide an easy one-click way to regenerate them.

what does the content of an ics file look like anyway?

ICS files just contain structured plaintext, so your usernames, passwords, phone numbers, email addresses, dates, times, and locations are all sent in an easy-to-read format. Any typical off-the-shelf network traffic scanning tool will be able to pick out your sensitive information from the data for later abuse.
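
For illustration, here is an entirely made-up example of a single appointment as it travels over the wire – every value below is fabricated:

    BEGIN:VCALENDAR
    VERSION:2.0
    PRODID:-//Example//Calendar//EN
    BEGIN:VEVENT
    UID:example-event-1@example.com
    DTSTART:20161205T100000Z
    DTEND:20161205T110000Z
    SUMMARY:Weekly ops call
    LOCATION:Dial-in: +44 1632 960000
    DESCRIPTION:Conference ID: 123456\nPIN: 9876\nhttps://meetings.example.com/join
    END:VEVENT
    END:VCALENDAR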