News, comment and opinion

This page is for items that are likely to have transient interest. Some of these posts have already been worked into articles - hence the date gaps between them.

Can test-driven development lead to bad design?

Crispin – 9 May 2018

I'm in the process of a major rewrite of CazMiranda, our in-house website and content management system. It was initially written portal-style to handle all aspects of the 30-odd websites and 100+ domains that we used to manage. Over time it grew organically into a large monolithic website, aka a big ball of mud. Although it's worked well for us, it now needs an overhaul, partly because I structured things with SQL Server in ways that were acceptable on our own hosting but don't work in the cloud (Azure). I'd also like the option of Linux hosting on a Raspberry Pi.

In the past I've used a bit of test-driven development (TDD), but never fully embraced it over a project's life cycle. Since I'm breaking CazMiranda up into a set of 'bounded contexts' that map naturally onto APIs, it seemed a good opportunity to give TDD a more comprehensive try. The point of a bounded context is that it's independent and small enough to be understood thoroughly. Examples would be an image processing module, an XML sitemap generator or domain management. Another important aspect is that each bounded context manages its own data and, if necessary, its own database.

The APIs that I'm building are all called via HTTP, so in essence each bounded context has its own little website, though serving data rather than pages. I started writing unit tests for these APIs, but quickly found that making something testable doesn't necessarily produce a useful design for a complex system.

The usual alternative to unit testing is integration testing, because it exposes the complexity of the real implementation. The difficulty with integration testing is that you (usually) have to set up a test database for each test run, which is slow to do, and the tests themselves run slowly because of database access latency. However, I've recently found a way with ASP.NET Core 2 to swap out the SQL Server database for a SQLite database that runs in memory. This gives the benefits of integration testing at unit testing speeds. Although it's a work in progress, this seems to be a better way of doing things.
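
As a minimal sketch of the approach - assuming a hypothetical EF Core DbContext called CatalogueContext for one bounded context, and xUnit as the test framework - the trick is to open a SQLite connection to :memory: and hold it open, because the in-memory database disappears when its last connection closes:

using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class CatalogueApiTests
{
    [Fact]
    public void Lists_items_from_the_database()
    {
        // An in-memory SQLite database lives only as long as its
        // connection, so open it once and hold it for the whole test.
        using (var connection = new SqliteConnection("DataSource=:memory:"))
        {
            connection.Open();

            var options = new DbContextOptionsBuilder<CatalogueContext>()
                .UseSqlite(connection)   // swap SQL Server for SQLite
                .Options;

            using (var context = new CatalogueContext(options))
            {
                context.Database.EnsureCreated(); // build the schema in memory
                // seed test data, exercise the bounded context, assert
            }
        }
    }
}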

In summary, TDD has its place but shouldn't be treated as a panacea. I've ended up with four testing modes: TDD unit tests, in-memory integration tests, integration tests with a test database and, finally, user testing of the finished article.

Changing the website URL to publish in Azure web deploy

Crispin – 29 March 2018

I was doing some routine maintenance of our websites today, with particular regard to converting links from http:// to https:// wherever possible - the reason is that Google seems to be waging war on 'non-secure' websites. In so doing I found that the WebDeploy.pubxml* file that configures our Azure deployment contains a setting called SiteUrlToLaunchAfterPublish. Those of you who use Azure will know that, after publishing, the website is launched at its azurewebsites.net address. By changing the SiteUrlToLaunchAfterPublish value to your https address, you can browse the 'real' website more quickly.

Setting LaunchSiteAfterPublish to false in the pubxml file will turn off launching the website after publishing altogether.
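
For reference, the relevant fragment of a publish profile looks something like this (the URL here is a placeholder):

<!-- WebDeploy.pubxml (fragment) -->
<PropertyGroup>
  <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
  <SiteUrlToLaunchAfterPublish>https://www.example.com</SiteUrlToLaunchAfterPublish>
</PropertyGroup>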

*The pubxml files are found in the Properties > PublishProfiles folder under the website root. In Visual Studio, expand the website itself, then Properties, then PublishProfiles.

Reducing friction for automatic emails

Crispin – 27 March 2018

Many organisations need automatic email systems, eg to send notifications after a transaction, or even just plain old newsletters.

I use Outlook for mail and, by default, its privacy settings block remote content from email addresses it doesn't recognise. This means that pictures are not displayed automatically, and I have to check the sender's address, in particular its domain, to see whether I trust it. Even as a relatively expert user, I'm wary of domains like mail110.sea91.rsgsv.net or mail58.suw91.mcdlv.net.

And it doesn't matter if the address includes 'on behalf of' and other pointers: I still have to think (always a bad thing for a user) and decide whether or not to continue.

Sure, I could whitelist the domain, but since the same domain may be shared by a number of organisations, with the risk that one of them might be dodgy, I never do that.

There is a better way

If you don't want to use your own domain to send bulk emails, then you can use a secondary domain with a bulk mail provider like Mailgun or SendGrid. We use Mailgun, and our mail volumes are so low that we've yet to incur a bill greater than £0.00.

We followed the Mailgun instructions and created a number of CNAME, MX and TXT DNS records for our subdomain tx, giving us a full name of tx.caz.ltd.uk. We chose tx in this instance because the application was transactional email. For newsletters (which we don't do), we might choose an additional subdomain to keep the reputations of the two names separate.
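
The records end up looking roughly like the sketch below. The values here are illustrative only - Mailgun's control panel supplies the exact ones for your own domain:

; illustrative DNS entries for a Mailgun sending subdomain
tx.example.com.                  MX    10 mxa.mailgun.org.
tx.example.com.                  MX    10 mxb.mailgun.org.
tx.example.com.                  TXT   "v=spf1 include:mailgun.org ~all"
smtp._domainkey.tx.example.com.  TXT   "k=rsa; p=..."   ; DKIM key from Mailgun
email.tx.example.com.            CNAME mailgun.org.     ; open/click tracking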

We send emails via the web API provided by Mailgun because this is convenient for us; there are alternatives, such as SMTP relay, if necessary.
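
As a sketch of the API route (the domain, key and addresses below are placeholders), a transactional message is a single form-encoded POST over HTTPS:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class TransactionalMail
{
    public static async Task SendAsync()
    {
        using (var client = new HttpClient())
        {
            // Mailgun uses HTTP basic authentication with user name 'api'
            var token = Convert.ToBase64String(
                Encoding.ASCII.GetBytes("api:YOUR_API_KEY"));
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", token);

            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["from"] = "noreply@tx.example.com",
                ["to"] = "customer@example.com",
                ["subject"] = "Your order has shipped",
                ["text"] = "Thanks for your order."
            });

            var response = await client.PostAsync(
                "https://api.mailgun.net/v3/tx.example.com/messages", form);
            response.EnsureSuccessStatusCode();
        }
    }
}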

I have no problem whitelisting secondary domains or subdomains like these, because the last part ties them to the organisation's own website. I'm vastly more confident that someone in that organisation will have configured the arrangement.

Serving KML and KMZ files in Azure

Crispin – 19 August 2017

Having run my own web servers for a couple of decades, in total control of the MIME types (aka media types) that could be served, I was caught out after we moved our websites to Azure: KML and KMZ files, used by Google Maps for map pins and overlays, are not served by default.

To fix the issue, I had to add a staticContent element to the web.config file under system.webServer:


 <system.webServer>
    <staticContent>
      <mimeMap fileExtension=".kml" mimeType="application/vnd.google-earth.kml+xml" />
      <mimeMap fileExtension=".kmz" mimeType="application/vnd.google-earth.kmz" />
    </staticContent>
...

After publishing the site with the revised web.config, my overlays re-appeared.

An Azure future

Crispin – 15 June 2017

We've run our own web servers for hosting since 1998, but since the current machines are now 9 years old and due for replacement, we're changing our approach. It's going to be much less work and less risky - in the long term - to move all our websites to the 'cloud' in the form of Microsoft Azure. In the short term we have to sort out the migration, and this involves making the technology for all the websites consistent.

We've already moved our DNS service from our (highly recommended) co-location host Hub Network Systems to dnsmadeeasy. The next step is to migrate the CazMiranda databases to Azure, swiftly followed by the websites. Once we're happy that it's all working, then the DNS switch will be flipped and we can retire the old servers.

Although all the websites will be consistent after 30 June 2017, they won't be using the very latest technology - ASP.NET Core - which would be more appropriate for Azure. It's taken 6 months to get to this point using the older .NET 4.6.2, so transitioning to .NET Core is a job for another day.

Is the future serverless?

Crispin – 11 November 2016

In the past, website hosters like us had to run their own servers, with all the IT hassle that entails. The future is starting to look different because of 'container' technology like Docker. Software containers are like shipping containers for applications: you put your application in a container and it can be moved around and/or scaled. This means it's possible to run a website on-premises on something as small as the credit-card-sized Raspberry Pi, a NAS box or a server, or off-premises in the cloud.
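
As a flavour of how little is involved, a container image for an ASP.NET Core website can be described in a handful of lines. This is a sketch - the image tag and DLL name are illustrative, and it assumes the site has already been published to a local publish folder:

# Dockerfile sketch for an ASP.NET Core site
FROM microsoft/aspnetcore:1.1
WORKDIR /app
COPY ./publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "MySite.dll"]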

The cool thing is that it should be possible to move the container between these devices, though the DNS system would have to keep up.

So is it really serverless? All software has to run on hardware and if this hardware is on-premises (even a Raspberry Pi) then you have a server to manage. On the other hand, if you are running in the cloud, then the responsibility of managing the server is certainly reduced and maybe eliminated. Let's hope.

Have your credentials been compromised?

Crispin – 25 September 2015

Troy Hunt is an Australian web security specialist, speaker and trainer who has an interesting blog and related website, haveibeenpwned. From this I’ve discovered that, along with 152 million other people, my Adobe log-on credentials have been compromised.

If you too try the haveibeenpwned website and find that your credentials have been compromised, it would be a good idea at least to change your password for the affected account. You may be able to delete the account if you don’t need it anymore, but some website owners are very reluctant to do that - it’s also possible that their PCI-DSS validation depends upon not deleting absolutely everything.

Smartphones overtake laptops as preferred internet device

Crispin - 6 August 2015

If anyone needs a reason for converting their website to be mobile friendly, this is surely it. Smartphones have now overtaken laptops as the preferred method of getting online in the UK.

My own mobile data use has been increasing a lot recently, since my SIM contract is essentially unlimited. It means that I make use of otherwise dead time at airports, on trains or as a car passenger. I'd become concerned about the security aspects of logging on to wi-fi hotspots all over the place; the mobile data contract took away much - it could never be all - of the worry of third-party networks.

Website performance is the next issue to solve

Crispin - 14 April 2015

After the so-called Google ‘Mobile-geddon’, the next issue for websites is going to be their download performance. In truth, it’s always been an issue, but high-speed broadband has masked the problem. The problem has now reappeared on 3G and 4G mobile connections, because the device user may not be getting anything like the headline connection speeds. Coupled with this is the fact that people may be paying by the megabyte for data.

Template-based websites (eg WordPress) are code-heavy and tend to make a lot of external references, all of which slow things down. This could (and should) change if the template-based website developers tighten up their code, but it’s quite likely that a lot of website owners won’t upgrade. This opens up an opportunity for those website owners who have, or move to, performant websites.

Google will be regarding mobile-friendliness as a ‘ranking signal’

Crispin - 16 March 2015 and 26 March 2015

In a blog post titled Finding more mobile-friendly search results on 26 February 2015, Google announced (down the page) that if your website is not mobile friendly then it will be downgraded in the search rankings when the search is conducted on a mobile device. The rollout blog post (see references below) made things a bit clearer.

What does this mean for website owners?

First, use the Google mobile-friendly test to see if your website passes. If it does, then you will not lose ranking because of this ‘signal’.

On the other hand, if it fails, check your web statistics to see how much mobile traffic your website receives. If it’s a lot and search engine ranking matters to you, then your website will certainly need to be recast as a responsive design. Retrofitting a responsive design to an existing site is sometimes possible, but if the design is more than five years old there’s a good chance it uses a table-based layout that won’t be easy to convert.

A better solution may be to take the opportunity to reconsider your marketing objectives, in these days when individuals use a range of devices to view your website, and also to review what’s worked and what hasn’t in the past. Armed with that, go for a website redesign.

References

It’s harder to multitask on a smartphone than a desktop

Crispin - 4 March 2015

The problem

Smartphones are best suited to single-task jobs like reading email or checking up on social media. It’s much harder to do something that requires information from multiple sources to be compared, eg checking flight options from multiple airlines and then correlating that data with train and bus timetables. It’s at times like these that big desktop monitors come into their own.

Even though smartphones are getting bigger (ie phablets), the apps that run on them present a single full-screen page at a time, so you can’t arrange each app in a window to suit the task in hand. What this means is that the user has to remember (or note down) all the information that they need to make a decision.

What does this mean as a website owner?

Don’t forget the desktop version of your website, particularly if users are likely to want to scale the page. There has been a rush towards ‘mobile first’ website development which has sometimes meant that the desktop version is just an oversized version of the mobile site. Think more clearly about your users’ needs so that they don’t have to.

We now spend more time communicating than sleeping

Crispin - 17 February 2015

According to this Ofcom report, the average UK adult now spends more time using media or communications (8 hours 41 minutes) than they do sleeping (8 hours 21 minutes - the UK average). It also appears that some six-year-olds claim the same understanding of communications technology as 45-year-olds.

Clearly trends in digital capability are going to affect how people view your website, particularly whether it does enough for them.

Securing all websites - for free!

Crispin - 27 January 2015

Google has given notice that it's going to give brownie points to websites that use a secure certificate. For smaller websites this would have been a bit of a problem because of the cost in time and money of acquiring and installing a certificate.

Fortunately help is at hand in the form of Let's Encrypt, brought to you by some big names in the industry. It will provide secure certificates for free, and will be available from mid-2015.
