Are spam filters damaging your cash flow?

One worrying trend we’ve noticed in recent months is the increasing likelihood that our customers’ spam filters will catch our monthly invoices, either consigning them to the oft-ignored spam folder or rejecting them outright.

Needless to say, this is concerning: our customers either won’t know their credit card is being charged (if they’re on our auto-bill system) or simply won’t know that payment is due, risking suspension of their account.

Intuitively it makes sense that spam filters would attach a high spam score to invoices & payment requests, as these sorts of documents very often feature in spam and phishing attempts.

So, what to do about it?

Our experiments revealed that nearly every major spam filtering system is substantially less likely to classify an email as spam if it originates from a well-known, reputable mail service such as GMail, Yahoo Mail, or Hotmail. The identity of the originator is determined by IP address rather than the unreliable From: header.

Bear in mind that your web server either has no reputation value at all or – worse – has an IP address that was previously leased to less scrupulous operators. As the availability of IP addresses tightens, you can certainly expect that the IPs attached to your shiny new server have been used by numerous websites & servers before reaching you.

We’ve experienced this situation a number of times – as we maintain a large number of servers in physically disparate locations (hence on different networks) and need to ensure email alerts can be delivered from all of them.

Google to the rescue!

At this point a possible solution becomes clear – route important email via a well known and reputable service to improve its chances of successful delivery.

We’ve been trialling this by utilizing the SMTP relay service provided by Google Apps Premier Edition – which powers all email destined for the wormly.com domain. It’s dramatically improved the situation for us thus far, and provides the additional benefit of archiving all web server outbound email within GMail.

To assist if you’d like to try something like this, I’ve posted a howto for configuring Postfix to relay via GMail’s SMTP service.
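For a taste of what the howto covers, the heart of the setup is a handful of parameters in Postfix’s main.cf. This is only a rough sketch using standard Postfix parameter names – the hostname, port, and password-file path shown are the commonly documented values for Gmail’s authenticated submission service, so verify them against the full howto before relying on this:

```
# /etc/postfix/main.cf -- relay all outbound mail via Gmail (sketch)
relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_use_tls = yes
```

Your Gmail credentials go in /etc/postfix/sasl_passwd (compiled into a lookup table with postmap), and Postfix needs a reload afterwards – the howto walks through both steps.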

Filed under: Servers,Web 2.0,Web Services — Jules @ 3:56 pm - November 11, 2008 :: Comments Off

Got Great Uptime? Tell The World!

Don’t be shy – your customers really want to know just how reliable your service is. So go ahead and brag about it with our Public Uptime Reports.

Uptime badges

When enabled, you can place one of our funky uptime badges on your site showing uptime for the previous 24-hour, 7-day, or 30-day period. You can also link through to a detailed uptime report where visitors can examine your uptime history on a yearly, monthly, or daily basis.

Take a look at this example – and click to see the full report:

Uptime verified by Wormly.com

It’s a great way to show your customers that uptime is important to you. Could this transparency be your edge over the competition?

Filed under: Improving Uptime,Meta,Servers,Web 2.0 — Jules @ 3:10 pm - August 1, 2007 :: Comments Off

Variance: Don’t let it kill your AJAX app

You might be surprised at just how variable the HTTP response times are in your web application. Take a look at this 24-hour example:

HTTP Variance

Crucially, the variance in this example is caused by the application response time, rather than the network. That’s the blue Exec component, not the TCP or Transfer components.

Variance-of-latency isn’t a huge problem for traditional page-refresh websites and applications, though it certainly does present an annoyance to your users. When your app starts to get cleverer and offers AJAX goodies, however, the problem becomes more serious.

Sometimes, your user clicks repeatedly in futility, wondering what’s going on and why she is getting no response. Other times she’s not sure if anything is working at all.

So our Happy User rapidly becomes a Sad User. Click… wait… wait. She’s not feeling so empowered by your application at this point.

A more problematic scenario is that out-of-sequence AJAX responses will break your UI. Many developers using mainstream (read: simple to deploy) libraries fail to code precautions against this.
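One common precaution is a sequence token: every request increments a counter, and a response is only allowed to touch the UI if it belongs to the most recent request. A minimal sketch – the function and callback names here are illustrative, not from any particular library:

```javascript
// Guard against out-of-order AJAX responses with a sequence token.
let latestSeq = 0;

function makeRequest(sendFn, onResult) {
  const seq = ++latestSeq; // tag this request with the newest token
  return sendFn().then((body) => {
    // Only the most recent request may update the UI;
    // stale replies that arrive late are silently dropped.
    if (seq === latestSeq) onResult(body);
  });
}
```

With this in place, a slow reply to an earlier request can no longer arrive late and overwrite the result of a newer one.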

And it’s easy to see why: By now, most AJAX-happy developers are aware of latency issues, and latency is quite simple to emulate in a test environment.

Variability-of-latency is not getting enough airtime – most likely because few people are actually measuring it as a part of their build process.

It’s not the network.
Raise the issue with your developers and they will probably launch into a delightful discourse on the “best-effort” nature of internet pipelines, asymmetric routing, and similar vagaries of internet infrastructure. The implication is that the user is at fault for choosing to use your app from a free wifi hotspot in Turkmenistan.

Our graph above, however, shows that the fault lies squarely with the application being unable to offer consistent response times. HTTP network overhead is just a tiny fraction of the total – and runs at a consistent 89ms anyway.

The lesson is: Fully understand your application performance and work to improve its consistency, particularly during peak periods. The underlying network is rarely to blame.
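To act on that lesson you first have to measure it, and variance is trivial to compute from a set of sampled response times. A quick sketch – the sample timings below are invented purely for illustration, not taken from the graph above:

```javascript
// Compute the mean and standard deviation of sampled response times.
// These samples are made up for illustration; feed in your own timings.
const execMs = [120, 135, 128, 610, 140, 125, 980, 130];

const mean = execMs.reduce((sum, t) => sum + t, 0) / execMs.length;
const variance =
  execMs.reduce((sum, t) => sum + (t - mean) ** 2, 0) / execMs.length;
const stdDev = Math.sqrt(variance);

// A standard deviation approaching (or exceeding) the mean is a red
// flag: users are seeing wildly inconsistent response times.
console.log(`mean ${mean}ms, std dev ${stdDev.toFixed(0)}ms`);
```

Run something like this over timings collected during peak periods and the inconsistency your users feel becomes a number you can track from build to build.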

Filed under: AJAX,Server Performance,Web 2.0 — Jules @ 9:08 am - April 23, 2007 :: Comments Off

Why You Should Limit Customer Choice

I’m confronted with the purchase of a new laptop, and am wracked with indecision. A common problem, perhaps? It’s easy to see why: With such a bewildering array of options available I simply can’t be bothered – or don’t have sufficient information – to make the choice.

12, 14, 15, or 17 inch screens. 2, 3, or 4 kg weights, battery life options, processors, RAM, all the other specs that I’m not interested in studying. It’s all there. For me to decide.

Intel Core Duo T2250 vs Intel Core 2 Duo 5500? Even stating CPU clock speed has become passé. In the good old days we had MHz and GHz.

I don’t even particularly want to use a laptop, but upcoming travel engagements dictate it as a necessity.

We constrained Wormly users’ choices.

And it did a world of good. Until quite recently, Wormly customers were presented with pricing for every nuance of the services we offer, and they could use as much or as little as they liked.

It seemed a brilliant idea at the time: offer no more and no less than what they needed (wanted?), and make sure they never pay for stuff they can’t use.

Brilliant, except that it ignored a fundamental principle: That customers rarely know what they want.

All they know is that they have a problem – and it’s up to you to present the right solution. By splitting our services into 4 distinct product offerings that appeal to 4 unique customer profiles, we’ve drastically simplified the buying process and – quite unsurprisingly – substantially improved our lead conversions.

Is it easy to buy your product?

Filed under: Marketing,Sales Process,Web 2.0 — Jules @ 8:45 am - April 19, 2007 :: Comments Off

Never Offline

A blog hosted by James Peterson, director of insights @ Wormly

On a semi-regular basis James will be trying to demonstrate that website infrastructure really is an exciting topic, and that your users really do care about the uptime & speed of your website.