ASD Protected Classification for AWS Sydney

On 23 January 2019, the Australian Signals Directorate (ASD) and AWS announced that the AWS Sydney Region (ap-southeast-2) was suitable for processing workloads classified at the PROTECTED level.

The primary reference for this is a small green table on the ASD site that looks like this:

ASD Cloud Services February 2019
ASD Protected Cloud Status as at February 2019

AWS had previously announced, on 28 March 2018, that public sector customers could self-assess AWS for PROTECTED level workloads; that self-assessment is now no longer necessary. However, a guide does exist (as noted by the asterisk in the above table) setting out conditions on doing this.

That guide is available to AWS customers via the NDA-enforced distribution system of AWS Artifact. The guide shows how to meet the PROTECTED level, and it should come as no surprise that using strong, managed encryption is a key part of this (pun intended).
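I won't reproduce the guide here, but as an illustration of what strong, managed encryption looks like in practice: S3 buckets can be given default KMS encryption. The JSON below is the shape accepted by S3's PutBucketEncryption API; the bucket-key alias is a hypothetical placeholder, not a value from the guide.

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "alias/my-protected-cmk"
      }
    }
  ]
}
```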

There are other reasons for Australian customers to look into Artifact, such as agreements around mandatory breach notification, etc. These are specific to regulatory requirements in Australia.

So what services changed to reach PROTECTED?

From what I can tell (and it's been 4 years since I worked at AWS), none. The guidance that recommends using strong encryption (Key Management Service) appears to be dated some time in the past. With 42 services in scope for PROTECTED, it's a big step up from the previous 6 or so that were covered under self-assessment.

A complete matrix of services in scope can be seen here.

It's interesting to see just 4 services with an UNCLASSIFIED rating that are not also available under PROTECTED: Route53, Organi[sz]ations, Shield, and Trusted Advisor. If you were using DNS for storing PROTECTED data you've probably got something wrong with you, to be honest; while Organi[sz]ations and Trusted Advisor don't store your information: they configure other services and give you operational recommendations, respectively.

UNCLASSIFIED goes world-wide

One thing that I hadn't noticed, but was told, is that UNCLASSIFIED workloads can now be run in any of the AWS commercial Regions (i.e., excluding China). I couldn't find a reference for this, but it's worth asking your local AWS team about for your workload.

This would mean that workloads previously running in the cloud that were not rated PROTECTED can now look again at things like S3 cross-Region replication, multi-Region redundancy, and more. Features like inter-Region VPC Peering for distributed fault-tolerance make it trivial for VM-based services to communicate across the world.

This is particularly attractive for services that are only available in, for example, us-east-1, or where costs are lower (e.g., storage).

My Favourite service for Protected workloads

So, back to the long list of PROTECTED services: here are my top 10 to choose from:

  1. IAM
  2. CloudTrail (but my preference is an Organisation CloudTrail these days)
  3. CloudFormation
  4. S3
  5. Lambda
  6. VPC, EC2, EBS, RDS, ELB (OK, that's 5, but they're so tightly interrelated)
  7. DirectConnect
  8. CloudWatch & CloudWatch Logs
  9. CloudFront & Lambda@Edge
  10. DynamoDB

So why this lot? I think with just this combination I can probably solve around 95% of all workloads, reducing TCO while increasing reliability and security posture at the same time.

So the time has come. If you're in the IT department of any public sector organisation, at local, state, or federal level, you should already be working this out.

If you can't figure this out, reach out. If you need training, check out Nephology's Advanced Security & Operations on AWS in-person training course, available throughout Australia (just ask). We've had over 10 years of continuous production use of AWS, with critical workloads.

CloudTrail evolution

At the end of 2013 (yes, 5 years ago already), AWS announced a new service: CloudTrail. It claimed to provide increased visibility into user activity for demonstrating compliance; it never claimed to be an audit log, but it is as close to one as can be architected without being in the critical path of API request execution.

From the start, CloudTrail supported immediate cross-account delivery of logs. These logs were thus untouched by the generating account: there was no user-deployed replication of data files between AWS accounts, and thus no question of the originating account editing the CloudTrail logs before a separate security team got access.

You can see why this was a massive success. Oh, and the first trail per Region was complimentary, except for the traffic charges incurred to copy the logs from the originating Region to the destination, and for the actual storage of the logs after delivery (the customer may choose a retention policy; see S3 Lifecycle Policies).
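As a sketch of such a retention policy (the prefix and periods are illustrative choices, not recommendations), an S3 Lifecycle configuration that archives logs to Glacier after 90 days and expires them after roughly 7 years looks like:

```json
{
  "Rules": [
    {
      "ID": "CloudTrailLogRetention",
      "Filter": {"Prefix": "AWSLogs/"},
      "Status": "Enabled",
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 2555}
    }
  ]
}
```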

I wanted to demonstrate the impact that the design of CloudTrail has had on innovation over the last half-decade…

The Early Years

Initially CloudTrail would itself execute from an AWS Service team’s own AWS account — one per Region. When provisioning an S3 Bucket for receiving these logs, a customer would have to find the authoritative list of Account IDs to whitelist for S3:PutObject.

Each time a new AWS Region launched (e.g., Stockholm), a new Account ID for CloudTrail's service account would have to be discovered and added to the destination S3 Bucket Policy.

Furthermore, CloudTrail was initially a per-Region service, so customers would have to scurry across every AWS account and define a CloudTrail trail in the newly launched Region; until they did, the new Region was effectively a blind spot for any governance and compliance processes that read the logs.

So you can see in the above, with three accounts and three Regions, we had to define CloudTrail 9 times! Let's look now at today's 20 Regions, and a customer with 20 AWS accounts: we're having to define CloudTrail 400 times. Luckily, there's an API, and CloudFormation support…

The Middle Years: Per-Account simplification

These first two problems (service Account IDs for the S3 Bucket Policy, and the new-Region blind spot) were solved in time. I and many others contributed product feature requests and Support Case feedback to help shape this: the CloudTrail account identity problem was solved with IAM Service Principals, and as such, the service principal name "cloudtrail.amazonaws.com" now matches the CloudTrail service in every current and future (non-China, non-US-GovCloud) commercial Region.

This bucket policy now looks like:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AWSCloudTrailAclCheck20150319",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": "arn:aws:s3:::myBucketName"
            },
            {
                "Sid": "AWSCloudTrailWrite20150319",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": [
                    "arn:aws:s3:::myBucketName/AWSLogs/myAccountID1/*",
                    "arn:aws:s3:::myBucketName/AWSLogs/myAccountID2/*",
                    "arn:aws:s3:::myBucketName/AWSLogs/myAccountID3/*"
                ],
                "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
            }
        ]
    }

Note: we still whitelist per-account Resource paths, so that other AWS customers can't send their CloudTrail logs to our bucket (which would be weird, and would just incur storage charges for us).

And for the new-Region problem: Global CloudTrail trails were introduced, which have a Home Region (the Region the global trail is defined in) but collect activity from every other Region.

Further improvements came in the form of cryptographically signed digest files, which form a chain in which each file contains information about the files that came before it. This provides a verifiable chain of history, such that modification of a CloudTrail data file, or modification or removal of a previous digest file, can be detected.
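Conceptually, the digest chain works like this simplified sketch (this is just the chaining idea, not the real CloudTrail digest format):

```shell
# Simplified digest chain: each digest covers the current log file's hash
# plus the previous digest, so tampering anywhere changes every later digest.
workdir=$(mktemp -d)
echo "log entry 1" > "$workdir/log1"
echo "log entry 2" > "$workdir/log2"

chain() {
    prev="genesis"
    for f in "$workdir"/log1 "$workdir"/log2; do
        filehash=$(sha256sum "$f" | cut -d' ' -f1)
        prev=$(printf '%s %s' "$filehash" "$prev" | sha256sum | cut -d' ' -f1)
    done
    echo "$prev"
}

original=$(chain)
echo "log entry 1 MODIFIED" > "$workdir/log1"   # tamper with history
[ "$original" != "$(chain)" ] && echo "tampering detected"
```

The real service also signs each digest file, so an attacker cannot simply recompute the chain after tampering.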

Now it's a little more manageable: a customer with 20 AWS accounts need only define CloudTrail 20 times. With APIs and CloudFormation, there is a chance of consistency.
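Those CloudFormation definitions can be quite small. A minimal sketch of a multi-Region trail resource (names are illustrative; the destination bucket and its policy must already exist):

```yaml
Resources:
  AuditTrail:
    Type: AWS::CloudTrail::Trail
    Properties:
      TrailName: central-audit-trail
      S3BucketName: myBucketName
      IsLogging: true
      IsMultiRegionTrail: true
      EnableLogFileValidation: true
```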

More and more services became capable of generating CloudTrail logs over the years, and the JSON logging format had a few modifications, which were handled beautifully by the versioned log entry format (currently up to log entry version 1.06).

And now with an Organisation approach

AWS Organisations is starting to build out more of a corporate approach to the decade-old multi-account pattern. A somewhat clunky Landing Zone solution tried to make this a little more turn-key, but AWS Organisations is now starting to deliver on simplification.

With a verified master account (which historically was your Consolidated Billing account), you can now push a master CloudTrail definition to all subsidiary accounts. This is done once, and all subscribed accounts configure this trail. Furthermore, those subsidiary accounts cannot remove the enforced Organisation Trail while they remain part of the organisation.

Thus consistency is ensured, and a security team no longer has to scan workload accounts to ensure that they are still logging CloudTrail, and logging to the correct enterprise destination(s).

Combining this with cross-account logging, we end up with something looking a little like this:

Warning: Log Prefix has changed

However, the prefix that CloudTrail logs to in this configuration has changed ever so slightly. Where previously we whitelisted individual accounts in the Resource part of our S3 bucket policy, we now whitelist a prefix that includes our Organisation ID. We don't need to worry about others (non-organisation-members) sending logs to us:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AWSCloudTrailAclCheck20150319",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": "arn:aws:s3:::myBucketName"
            },
            {
                "Sid": "AWSCloudTrailWrite20150319",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::myBucketName/AWSLogs/o-1234567/*",
                "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
            }
        ]
    }

What remains to be tested (by me, at least) is creating a new account and seeing it get an Organisation Trail pushed to it upon creation; adding an already-created account and seeing the same; and lastly, removing an account from an organisation and seeing it no longer able to log.

In Summary

Hats off to the IAM, CloudTrail, and Organisations teams for making this all come together (as well as the other service teams who have managed to get CloudTrail support into their products).

Customers will want to move to this, but there may be adjustments required for any analytics solutions reading the CloudTrail files, due to the new location after implementing this change. Customers may choose to leave their existing CloudTrail in place for a few days after deploying an Organisation Trail in order to update those analytics services.

CloudTrail continues to be at the heart of Governance and Compliance assurance when running on AWS.

There’s much more to talk about here for the configuration of the destination S3 Bucket. If you would like to spend a deep dive with me, check out our in-person Advanced Security & Operations on AWS course. If you’d like this delivered in your city, please get in touch with us at Nephology.

Web Transitions and Compatibility

I have spoken previously of the web protocol transitions that are currently happening: at the encryption layer, the HTTP layer (OSI Layer 7), and even the TCP layer. But I wanted to dive deeper on this, and speak about the benefits of starting these transitions, and the risks of not finishing them.

The IT industry is terrible at discarding the abandoned and obsolete technologies it once heralded. Change Management and ITIL processes, and the traditional project management approaches that constricted the velocity of change, have given rise to a culture of not changing anything: avoiding the work of actually moving forward.

For each technology, there is a risk and an advantage both to enabling the new version and to disabling the previous one:

  • HTTP/2. Enabling risk: none. Enabling advantage: faster, less bandwidth. Disabling-previous risk: may exclude older browsers and integrations. Disabling-previous advantage: none.
  • TLS 1.3. Enabling risk: middleboxes (transparent proxies) with poor TLS implementations (most have patches available). Enabling advantage: faster (fewer round trips), more secure (fewer old ciphers supported, some new ciphers). Disabling-previous risk: may exclude older browsers and system integrations. Disabling-previous advantage: reduced security risk.
  • Security Headers. Enabling risk: may uncover poor implementations in your products! Enabling advantage: the client helps with security.
  • Network (web client) logging. Enabling risk: lots of network traffic; turns requests (read operations) into events (write operations). Enabling advantage: discover issues affecting clients you didn't cater for.

What I typically see is operations teams that leave every legacy protocol and cipher enabled, with no headers inserted, and no modern ciphers or protocols.

Today I'll take you down one of these rabbit holes: TLS protocols, ciphers, and Message Authentication Codes (MACs).

Protocol Transitions and backwards compatibility

The TLS conversation between a web browser and a server starts with the selection of the newest protocol version that both support. At this point in time (Dec 2018), there are 7 versions: SSLv1, which was never used in the wild; SSLv2, SSLv3, and TLSv1, all of which are now deprecated by PCI DSS 3.2 and should not be used; TLSv1.1, which was only the "latest" version for around 18 months a decade ago (a period in which only one new browser appeared, Safari, which has had many newer versions since then that support newer protocols); TLSv1.2; and the very new TLSv1.3.

The first step to a transition for web service operators is to ensure TLSv1.2, and if available, TLSv1.3 are enabled.

Don’t panic if you can’t enable TLSv1.3 right now, but keep patching and updating your OS, Web Server, Load Balancers, etc, and eventually it will become available.
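When you do have a current Apache (2.4.37 or later) with mod_ssl built against OpenSSL 1.1.1, restricting protocols is only a couple of directives; the cipher list shown here is one defensible choice, not the only one:

```apache
SSLProtocol         -all +TLSv1.2 +TLSv1.3
SSLCipherSuite      ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
SSLHonorCipherOrder on
```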

Your next step is to examine your logs and see if you can determine the protocol and cipher being used. The standard "combined" log file formats don't record this, but you can add it in. For example, Apache defines one of its log formats as:

LogFormat "%h %l %u %t \"%r\" %>s %O" common

We can adjust this with the additional detail:

LogFormat "%h %l %u %t \"%r\" %>s %O %{SSL_PROTOCOL}x %{SSL_CIPHER}x" commontls

And now any of our sites can be switched from common to commontls, and we can see the protocol and cipher used. At this point, sit back for a week, and then review what protocols were actually seen over that period, using a combination of cut, sort, or some perl:

cat access.log.1 | perl -ne '/\s(\S+)\s(\S+)$/ && $h{$1}++; } { foreach $val ( keys %h) { print "$val = $h{$val}\n" }'

You’ll end up with something like:

TLSv1.3 = 468
- = 16
TLSv1.2 = 28188

So we see only modern connections here (we can ignore the 16 non-matching lines).

Of course, you may also see older protocols mentioned, so your question should be what to do next. If these all originate from a few regular IP addresses, then you possibly have an old client, or an old script/integration process, for example running on Python 2.x, old Perl, Java 6, etc.
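With the commontls format above in place, a quick sketch for finding which client IPs are still negotiating legacy protocols (this assumes the protocol is the second-to-last field, as in our format; the sample log lines are fabricated):

```shell
# Count connections per client IP that negotiated SSLv3/TLSv1/TLSv1.1.
# Sample data stands in for a real access log.
cat > /tmp/access.log.sample <<'EOF'
203.0.113.5 - - [01/Dec/2018:10:00:00 +0800] "GET / HTTP/1.1" 200 1234 TLSv1 ECDHE-RSA-AES128-SHA
198.51.100.7 - - [01/Dec/2018:10:00:01 +0800] "GET / HTTP/1.1" 200 5678 TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256
203.0.113.5 - - [01/Dec/2018:10:00:02 +0800] "GET /api HTTP/1.1" 200 999 TLSv1 ECDHE-RSA-AES128-SHA
EOF
awk '$(NF-1) ~ /^(SSLv3|TLSv1|TLSv1\.1)$/ {print $1}' /tmp/access.log.sample \
    | sort | uniq -c | sort -rn
```

Here the one legacy client, 203.0.113.5, surfaces with a count of 2; the anchored regex deliberately does not match TLSv1.2.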

If that's the case, then you have a conundrum: those old integration processes will prevent you from securing those integrations! In order to maintain a secure connection, the client/integration will need an update: move to Java 8 (or 11), Python 3.6 (or 3.7), etc. If that's an external service provider or third party, then it's out of your hands; if you're a paying customer, then it's time to request that your provider updates their environment accordingly. A key phrase I love to bandy about is:

We've upped our standards; up yours.

Of course, you can always just disable those older protocols (perhaps after some notice, if it's an important integration). Nothing gets work moving quite like a deadline: "We're turning off TLSv1 on 1 April – no joke; TLS 1.2 is our new minimum".

If you're setting up a new service today, I would strongly suggest only enabling TLS 1.2 and 1.3 from the start; and over the coming years, make a conscious plan to schedule the deprecation of TLS 1.2.

If you only have one (or two) protocols enabled, then as part of your operational responsibility, you only have to worry about one (or two) protocols being compromised.


Ciphers

Some providers enable almost every cipher under the sun. I have no idea how they keep aware of the vulnerabilities in all those ciphers. I prefer to minimise this down to the smallest, strongest set I can offer. Today, that's AES in GCM mode (either AES-128 or AES-256). AES in CBC mode is deprecated (but it's the strongest that MS IE supports on many Windows versions); Microsoft announced in 2018 that CBC is no longer considered secure. So your choice is to support MS IE (on older platforms), or be secure. Do your developers a favour, and drop MS IE compatibility.

New ciphers such as CHACHA20 (available under TLS 1.3) are fine as well. But all the older ones, such as RC4, DES, and 3DES, should be gone. As above, check your logs, once you have sufficient logging enabled, to determine whether these are actively being used.

Key Exchanges

When keys are exchanged, this should always be done using ephemeral (temporary) keys. Your cipher suite soup should have DHE, for Diffie-Hellman Ephemeral, or ECDHE, for Elliptic Curve Diffie-Hellman Ephemeral, key exchange. Anything with plain DH is using the same keys repeatedly, and should be disabled. Again, look at your now-enhanced logs and determine whether you can disable older key exchange algorithms.
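You can ask your local OpenSSL which cipher suites match a given specification; for example, restricting to ephemeral ECDHE key exchange with AES-GCM (the exact output depends on your OpenSSL version):

```shell
# List locally-known cipher suites restricted to ECDHE key exchange + AES-GCM.
openssl ciphers -v 'ECDHE+AESGCM'
```

The same cipher-string syntax is what you feed into Apache's SSLCipherSuite or nginx's ssl_ciphers, so it's a handy way to preview a policy before deploying it.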

Message Authentication and Check-summing

The last part of a cipher suite is usually a message digest (checksum) function, such as SHA, MD5, etc. The only ones that should be offered today are SHA256, SHA384, and SHA512. The larger the number, the more bits in the digest and the lower the chance of a collision, but the higher the computational cost.
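The size trade-off is easy to see locally; here the coreutils checksum tools stand in for the digest functions themselves:

```shell
# A SHA-256 digest is 256 bits = 64 hex characters;
# SHA-512 is 512 bits = 128 hex characters.
printf 'hello' | sha256sum | cut -d' ' -f1 | awk '{print length($0), "hex chars"}'
printf 'hello' | sha512sum | cut -d' ' -f1 | awk '{print length($0), "hex chars"}'
```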

Over time, newer checksums will come, but most major browsers don't support anything higher than SHA512 at this time.

In Conclusion

Migrating the TLS cipher suite and protocol are probably two of the most security-critical pieces that need professional tuning, to avoid being accidentally configured in a way that can be compromised. The standard approach of enabling everything, or accepting vendor defaults, is dangerous.

If you’re not confident with this, then read more, or join Nephology during one of our Web Security training courses for face-to-face help.

2018: the year that web security moved forward

According to @year_loading on twitter, we’re now:

▓▓▓▓▓▓▓▓▓░░░░░░ 62%

And there have been some great advances in web security and capability that are finally putting a nail in the coffin of 25 years of web legacy. It's been a long time coming.

The Early Years of The Web

I graduated high school in the class of '93, starting a Bachelor of Computing and Mathematics at UWA in 1994. I wrote my first web pages in the summer of 1995 – content about the city of Perth (there was no Wikipedia) – and by 1996 I was being paid to carefully craft web content in two languages, English and French: content which is still online today.

Cascading Style Sheets were born (“W3C Recommendation”) in late 1996, and by 1997 I was lecturing about CSS to staff at UWA in my role as the university’s webmaster. JavaScript was starting out, and Sun was all about embedding their Java language into the browser as clunky, heavy applets.

It was the Netscape Navigator 1.1N release of 1995 that spawned the start of SSL for transferring the hitherto plaintext, unencrypted HTTP protocol, coupled with X.509 certificates that looked like they could provide a distributed secure system over untrusted networks with a solid chain of trust. This looked like something that could potentially be used for some sort of transaction: perhaps as far as commerce.

It has now been nearly 25 years since that Navigator release; and as expected, the encryption technologies — as open as they were — are now relegated to the past. Or they should be.

The horrible middle years

Don’t let me dwell; suffice to say:

  • Java Applets
  • Fractured web browser ecosystem (Microsoft)
  • Flash
  • Digital Rights Management
  • Proprietary (closed) formats

…were all horrible: slow, clunky, insecure, or just broken.

And today in 2018

The HTML mark-up language today is just as readable and renderable as it was then. Openness has preserved the history of the web – for content which has not been replaced or removed. Archivists decry a period of our history for which paper documents are declining, but open formats have outlasted proprietary ones and are still functional.

As an aside: in 1999 I joined Debian as a (volunteer) developer; Debian itself turned 25 years old this year.

Cryptography rules the world. We stand today where Google Chrome, which accounts for over 60% of web browser market share, reports that as of July 2018, 76% of the traffic its users consume is over HTTPS on Microsoft Windows desktops, and 86% for Apple Macintosh users.

Moore's Law on computing power, and simple economics, mean attacks on now-'historic' security mechanisms are now feasible. I've seen attacks on GPG short key IDs (8 hexadecimal characters), with people repeatedly generating keys to impersonate others until a short key ID matched (the long key IDs were different). Attackers stuff comments into PDFs to bloat their size but match their checksums.

What’s clear is that the majority of the IT Industry has become terrible at one thing: deprecating legacy. I see this with Java developers who ignore warning messages about deprecated methods. And I see this with web sites that turn on every possible combination of TLS (SSL) protocols, ciphers and checksums, despite the majority of them now being deemed insecure.

Transitions are hard. Here’s a list of some of the web transitions going on now to help secure, speed, or improve content or connectivity:

  • HTTP/2.0, replacing HTTP/1.1 and 1.0
  • HTTPS replacing unencrypted HTTP
  • IPv6 replacing IPv4
  • TLS 1.2 replacing all earlier versions, and itself about to be replaced by TLS 1.3
  • AES ciphers replacing RC4, DES, 3DES
  • GCM-mode based encryption ciphers replacing CBC mode block chaining
  • Elliptic Curve mathematics replacing RSA prime-number factorisation for certificates and key exchanges
  • Stronger message digests such as SHA-2-384 replacing earlier SHA-2-256, SHA-1, MD5 and worse
  • Brotli compression replacing gzip and deflate
  • Angular, React, Bootstrap and other JavaScript frameworks replacing the Flash and Applets of the past
  • DNSSEC starting to roll out (come on, Route53)
  • Browsers being able to actively enforce stricter policies around content and actions they take
  • SVG replacing bitmap formats for specific use cases
  • Java 8 replacing Java 6 and 7
  • Java 11LTS about to be replacing Java 8
  • Python 3.x replacing Python 2.x
  • NodeJS 8 replacing NodeJS 6 and 4.
  • Linux replacing Solaris and all Unix before it
  • Cloud replacing on site data centres, Co-Lo and traditional ‘managed services’
  • PaaS replacing IaaS + blood, sweat and tears
  • SAML and OpenID replacing LDAP

With all of these changes (and more), it's hard to keep up. Some of these items are for SysOps people to fix, some are for developers, yet all can be done by full-stack DevOps engineers.

I had been idle on web content until about 6 months ago, frustrated with a lack of real innovation and cohesiveness, and no real way to differentiate 'good' from 'bad' configurations of all of the above. Sadly, many poor configurations of systems and solutions are masked by being functionally working, even if they are inferior in speed, efficiency, cost, or security.

For many years, governments have tried to move to IPv6, but have successively failed. ISPs fail to offer IPv6 to their customers, undermining the drive for this major transport protocol migration; workaround upon workaround has had to be devised. IPv6 traffic in Australia stands at around 5% in 2018; yet at no additional cost, many solutions could be deployed dual-stack.

Managing these transitions while systems are live is interesting. But this is what motivates me.

We have a lot of web legacy. There is much to be done.

Full speed WiFi: Moving to UniFi

Finally, Internet speeds in Australia are outstripping the capability of 802.11n. We've been running an ISP-issued router for some time, but I had been disappointed at the lack of security updates (for the KRACK attack) from the vendor (my ISP's now-abandoned self-branded "Labs" hardware), and the limited 12 Mbit/s speed on WiFi was becoming annoying. Our local network houses various appliances on wired Ethernet, such as TVs, set-top boxes, Blu-ray players, etc. But most of our online experience is via laptop, phone, and iPads.

We're in a modest building on a single level. The 802.11n footprint easily covered the entire property, and the router was housed in a cupboard towards the front door. The NBN FTTP node is located in a tool shed behind the garage, alongside a switch cabinet containing a 24-port patch panel and a Gigabit switch, reticulated to just six ports inside the building.

The existing topology patched the NBN to that cupboard, where it went into the existing all-in-one router; from there it patched back to the tool shed, and onwards to the rest of the building.

We grabbed a UniFi 8-port managed switch with 4 ports able to do Power over Ethernet (to replace the 8-port unmanaged, non-PoE switch), along with the Cloud Key for unified management, and the NanoHD access point. The one missing device from this combo is the Security Gateway – only because the supplier was out of stock (for a month!).

We unboxed the equipment, and swapping the switch in the tool shed was a trivial plug-and-play experience. The Cloud Key plugged into one of the ports, and within minutes we were able to log into the controller (the Cloud Key device), 'adopt' the switch, and ensure that all firmware was updated.

The Cloud Key offers SSH as a service, and with authentication I was able to log in. I was very pleased to find myself at home on a Debian system (having been a Debian Linux developer for close to two decades). But that was very much poking under the hood: normal operations do not require this, and I would imagine the majority of customers need never know.

With the AP plugged in and configured with a temporary new SSID, we initially found intermittent connectivity issues, but after moving patch ports this stabilised; I can only put this down to the age and quality of the Cat-5 based Ethernet and the patching we did a decade ago.

After a few days of testing, it was time to go 'live': the existing ISP router had its WiFi disabled, and was physically relocated to the tool shed, where it can terminate the NBN connection and connect directly to the UniFi switch. The NanoHD AP now sits patched via the patch panel inside, where the old ISP router used to sit.

As there were a number of wired devices plugged into the back of the old router in the cupboard, the unmanaged switch that was previously outside has been relocated inside.

The UniFi interface gives a nice visual topology of the devices it can see; in this case, it can't see the unmanaged switch, hence two devices appear on Home Switch port 1.

Thus far I am pleased with the deployment. It's definitely not cheap equipment: so far we're looking at over AU$500, and when the retailer has the Security Gateway in stock, we'll look to get that too (another AU$150 or so).

So a random mid week test before midnight now shows:

Our next tests will be to run separate WiFi networks for visitors, limit their times of operation, and channel them to separate VLANs, once the Security Gateway is in place. The UniFi system neatly ties together moving links from unencapsulated vanilla 802.3 to trunked, multi-VLAN links: across the various managed APs and the switch ports they are plugged into, and between the switch and the Security Gateway.

We've also been playing with the UniFi app on Android, remotely viewing our network. There's more experimentation to come, but thus far, it's got approval from the team here.

Our thanks go out to Troy Hunt for his excellent explanations.

Please review your crypto

We find ourselves speaking regularly with people about reviewing their currently configured web transport security.

It's one part of the OWASP Top 10: ensuring that data transported across the untrusted networks of the Internet is adequately encrypted. The configuration of your web server, and the responses you have (possibly mis-)configured it to give out to anonymous users across the Internet, may be disclosing vital attack information that helps compromise your systems; at best, it shows a lack of strong policy or understanding.

First and foremost, if you're not using HTTPS, but plain-text HTTP, then you're about to get warnings on your site marking it as "Not Secure". These have been present on form-submission pages in Chrome for a few months, but now that messaging is going out for all unencrypted web pages.

Why not take 3 minutes and visit the Qualys SSL Labs Server Test and submit the URL of your company web site. Then repeat this for your company's AD FS or single sign-on service, or webmail service. It will take about 2 minutes each, and we can thank Qualys for sponsoring this while we wait…

Now scroll down to the first section, "Summary", and look for any red or orange bars on the page showing some pretty straightforward warnings. You may have a certificate from one of the previously Symantec-operated or -owned Certificate Authorities which, although expiring some time in the future, will actually stop working for a large majority of users from September. Better action that ASAP!

Next, scroll down to the section that says "Configuration": if it shows only TLS 1.2 as enabled, and everything else disabled, you're probably doing well. But you're not done yet.

If you also have TLS 1.3 enabled, then well done (this version is quite new).

But if you have TLS 1.1 or older, then it's time to turn that off. And for some of those, the time to turn them off was actually back in 2014. But I understand, you've been busy.

Now drop a little further to "Cipher Suites". You should have a note saying "suites in server order preference"; if not, then you should enable that on your server. And finally, ensure you have a cipher suite something like TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 very near the top of your ordered list.

Let me break this down: a cipher suite name normally goes something like TLS_[Key Exchange]_[Certificate Type]_WITH_[Bulk Cipher]_[MAC]. So TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 means ECDHE key exchange, an RSA certificate, AES-128 in GCM mode as the bulk cipher, and SHA256 as the MAC.

The Key Exchange: ECDHE is good, DHE is average, and DH is bad. Get rid of DH and DHE.

Certificate types: this depends on the certificate you generated. Most people use 2048-bit RSA-based certificates, but longer RSA keys are possible (and slower), and newer ECDSA-based (elliptic curve) certificates are coming soon. RSA is fine for now.

Bulk Cipher: Today, AES 128 or 256 (doesn’t matter which) are considered OK. 3DES, RC4 are really bad. Just leave the AES ones enabled.

MAC: if it just says "SHA", that's bad (SHA-1); if it says MD5, that's terrible! SHA256 and SHA384 are both currently acceptable.
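As a quick illustration of how such a suite name decomposes (pure shell string-chopping; the informal field names are mine):

```shell
suite="TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
kx=${suite#TLS_}; kx=${kx%%_WITH_*}    # key exchange + certificate type: ECDHE_RSA
rest=${suite#*_WITH_}                  # AES_128_GCM_SHA256
mac=${rest##*_}                        # MAC: SHA256
cipher=${rest%_*}                      # bulk cipher: AES_128_GCM
echo "KeyExchange/Auth=$kx Cipher=$cipher MAC=$mac"
```

This prints KeyExchange/Auth=ECDHE_RSA Cipher=AES_128_GCM MAC=SHA256, i.e. exactly the four review points above.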

Of course, that's for today. Last week, we could have left CBC-based ciphers in there, but then research from Microsoft showed they are now not considered secure. In removing the last of the CBC chain-mode suites, though, you're probably going to stop MS IE 11 from being able to connect.

There are other tools (Mozilla Observatory), and a lot more places to look (such as the HTTP headers that can help secure content, and improvements to HTML as well) that should all be reviewed.

Nephology has been securing ecommerce web systems globally, for commercial and government organisations, since 1995 – from the birth of the web. Contact us for more information.

CBC-mode no longer safe

On 12 June 2018, Microsoft announced “Timing vulnerabilities with CBC-mode symmetric decryption using padding”. You may not have noticed, but the implications are interesting.

Microsoft’s own web browser story has two characters: the venerable Internet Explorer (IE, or MSIE), and the new kid on the block, Edge. The Edge team is working to try and keep up with the agility that its most active competitors display: Chrome, Firefox, and Safari. However, MSIE does not seem to receive much in the way of new features or even maintenance of existing features. Yet MSIE remains a default browser for a large number of slow moving Enterprise and Government organisations, due mainly to the fact they are still running Windows 7, and have these (typically older) versions of MSIE installed by default.

After 20 years of cryptography on The Web, there is a lot of legacy; the danger is that some of this legacy, which was once state-of-the-art, is now downright insecure. During this 20-year period (since Netscape Navigator 1.1N brought RSA crypto and X.509 to the web), vendors and producers of technology solutions have sought to support newer protocols, ciphers and message digest algorithms, but have not widely removed support for this legacy. This has been coupled with a general lack of understanding by the administrators of web sites and HTTPS-terminating firewalls/load balancers, and a lack of agility of the enterprise to appropriately maintain the tools and services they provide to their staff and users.

Web Crypto Transitions

Let me break this down into several areas of web security that are being maintained:

  • TLS Protocols (how we will negotiate what encryption to use)
  • Key Exchange mechanisms
  • Bulk Ciphers
  • Cipher Options
  • Message Authentication Code (MAC)
  • Certificate chain-of-trust signing Algorithms
  • Certificate key Algorithm

All of these are being replaced over time, and likely will be replaced REPEATEDLY in future, spurred on by discoveries of vulnerabilities in the techniques being used, or simply the declining cost and complexity of brute-forcing solutions to these.

Until now, the mere presence of a web browser “green padlock” has hidden the sins of poorly configured web sites.

TLS Protocols

From 30 June 2018, PCI DSS 3.2.1 requires that cardholder environments no longer support ‘early TLS’, meaning TLS 1.0 or older (SSLv3, SSLv2). This leaves just TLS 1.1 and 1.2 available to most, with TLS 1.3 only just starting to become available (see the IETF status for a timeline of its work).

TLS 1.1 was itself released in RFC 4346 in April 2006; 1.2 was released in RFC 5246 in August 2008. During those 28 months, only one Firefox release and a handful of Chrome releases had TLS 1.1 as their highest supported protocol version. Both of these browsers have had many, many releases in the ensuing DECADE, so for all intents and purposes, most web site providers should be in a comfortable position to ALSO disable TLS 1.1 (but check your web logs first).

MSIE 11 does support TLS 1.2; however, for older versions of MSIE, support is either disabled by default (a simple option to turn it on) or not available at all. Java 6 update 141 does support TLS 1.2 by default, but not necessarily the modern key exchanges, ciphers and MACs.
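In Python’s `ssl` module, for example, enforcing that floor takes one line; a sketch (no connection is made here):

```python
import ssl

# Refuse anything below TLS 1.2 (so TLS 1.1, 1.0 and SSLv3 are all excluded),
# matching the PCI DSS requirement described above.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```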

Key Exchange Mechanisms

Asymmetric encryption is used to exchange information between parties over untrusted networks. It is a computationally complex (slow) process, so it is used only to agree the session keys that are then used for symmetric bulk encryption.

  • Straight Diffie-Hellman (DH) key exchange was the first stab at this, but it used the same private key for all exchanges.
  • Diffie-Hellman Ephemeral (DHE) was the fix to this; a new key used each session.
  • Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) now uses elliptic-curve mathematics to speed this up.

However, MSIE does not support ECDHE, and neither does Java 6 out of the box with the default cryptographic provider (you can look to replace with, say, Bouncy Castle).
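Dropping DH and DHE in favour of ECDHE can be expressed directly in OpenSSL’s cipher-string syntax; a sketch using Python’s `ssl` module:

```python
import ssl

# Keep only suites using ephemeral elliptic-curve Diffie-Hellman key exchange,
# excluding anonymous suites as well.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ECDHE:!aNULL")
names = [c["name"] for c in ctx.get_ciphers()]
# Remaining TLS 1.2 suites are all ECDHE-*; TLS 1.3 suites (the TLS_* names)
# always negotiate an ephemeral (EC)DHE exchange by design.
```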

Bulk Ciphers

Bulk symmetric ciphers use the same key to encrypt and decrypt, and are generally fast. Today we seem to have settled upon using AES 128 or AES 256; newer ciphers such as ChaCha20 are around, but not implemented on every server and client, so less used.

Cipher Options

The biggest impact of the above MS announcement is that CBC – Cipher Block Chaining – is now deprecated. In its place stands Galois/Counter Mode (GCM). GCM is not (at this time) supported by MSIE, or by the Java 6 default JCE.
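You can see the AEAD/CBC split in the cipher list that OpenSSL exposes; a sketch using Python’s `ssl` module (the `AESGCM` cipher-string alias restricts the TLS 1.2 list to AEAD suites):

```python
import ssl

# Each suite reports whether it is an AEAD construction (GCM and friends).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
aead_names = [c["name"] for c in ctx.get_ciphers() if c["aead"]]

# Restrict the TLS 1.2 suite list to AES-GCM only, dropping every CBC suite.
ctx.set_ciphers("AESGCM")
names = [c["name"] for c in ctx.get_ciphers()]
```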

Message Authentication Codes (MAC)

SHA-1 based MACs have largely been dropped today, replaced by SHA-2-256. However, some organisations are now raising their minimum to SHA-2-384, which is beyond the capability of MSIE and of Java 6.
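For a feel of the difference, here is a sketch comparing HMAC tags built with the two hashes (the key and message are made-up values):

```python
import hashlib
import hmac

# HMAC over the same message with SHA-256 vs SHA-384: the stronger hash
# simply yields a longer (384-bit rather than 256-bit) authentication tag.
key, msg = b"session-key", b"record payload"
tag256 = hmac.new(key, msg, hashlib.sha256).digest()
tag384 = hmac.new(key, msg, hashlib.sha384).digest()
print(len(tag256), len(tag384))  # 32 48
```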

Certificate Chain of Trust

Over the last few years, the signing algorithm used to confer trust from a Certificate Authority to a subject’s certificate has moved from MD5withRSA, to SHA1withRSA, and now sits at SHA256withRSA. This remains well supported.

Certificate Key Algorithm

The certificates that sites present are currently based upon the RSA signature algorithm, using a key length of 2048 bits. While this key length can be made longer in future, it gets much slower.

In its place will be newer ECDSA-based keys. It’s possible that over time, servers will have both RSA and ECDSA certificates at the same time, and will serve the one based upon the client’s expressed cipher suite preference during connection establishment.

CBC-mode no longer safe

Circling back to the CBC mode for AES being deemed no longer safe: what does this mean? For sites that take this seriously, it means moving to only GCM-based modes. For many, that means the end of supporting Microsoft Internet Explorer, as well as any system-to-system integrations that run from a Java 6 environment.

That’s too bad, but it’s not like those Java 6 services haven’t had since 2011 to move to Java 7, or since 2015 to adopt Java 8. We’re now at a point where good security practice must force these environments to comply with current requirements; the organisations that will cope best are those that apply an Agile methodology of frequent, small updates and continual True Maintenance (ie, not just application functionality updates, but underlying VM/JVM/runtime environment updates).

We recently heard of one of Australia’s Big Four banks starting to ask its customers to raise the minimum protocols, ciphers, chaining modes, and even MACs that they support.

While TLS 1.2 is reasonably well supported, even on old Java, the move to GCM and a SHA384 MAC was well beyond what Java 6 can do out of the box. It’s a strong move, but it needs to be well communicated with clients, probably 6 months in advance. Luckily, this bank regressed to continuing to support CBC and the SHA256 MAC, but with this news, CBC may once again be on the chopping block soon.