Blog

These blog posts are maintained by seeDetail employees. There is a technical blog on testing written by Daniel Cottrell, and another blog on wider issues surrounding testing and IT written by Chris Neville-Smith.

Posts from all blogs are collated here.

3 March 2013, 3:02 pm

Who needs 1984 when we’ve got Foursquare?

Online snooping is getting worrying – but if we want to stop this, we must ask some fundamental questions about social media.

The next poster in the series says "Facebook is privacy"

When George Orwell created Nineteen Eighty-Four and Big Brother in 1948, he could scarcely have imagined the future. Not so much the nightmarish vision of the Ministry of Truth, Ministry of Plenty, Ministry of Peace and Ministry of Love, but two things he would never have guessed. Firstly, the emergence of god-awful reality TV show Big Brother (and all the other god-awful reality TV programmes it spawned), and secondly, a load of persecution complex-ridden Middle Englanders who say “It’s just like 1984” every time they get a speeding fine. I suppose some bits bear a resemblance to the book, but that tends to be things like petty council officials invoking anti-terrorist laws over littering. All in all, it’s a bit of a damp squib.

But fear not, Mr. Orwell, all is not lost. Recently we have seen the arrival of a new program called RIOT (Rapid Information Overlay Technology). This little device uses information from social networks to track the movements of individual people. It is suggested this could be used as a way of monitoring people who are about to commit a crime – cue analogies to Precrime in Minority Report – but just like its fictitious counterpart, there are serious questions over how reliable this would actually be. Certainly there’s not much enthusiasm from the Police. Which makes me think the key market might be employers. Like a retail manager who wants to know if his staff are shopping at competitors. Or a civil servant checking which pesky underlings attend opposition party meetings in the run-up to an election. This could be fantastic news – if you are a control freak with lots of money and power.

There is just one small but crucial difference from what Orwell had in mind. The subjects of Oceania were forced to be monitored day and night in everything they did, through cameras, curfews and spies. RIOT, on the other hand, runs entirely off information that its unwitting subjects quite happily stuck in the public domain. Love your Facebook status updates? Can’t live without your Tweets? So does RIOT. All this information about where you are and what you’re doing is most useful, thank you very much, citizen. Better still, why not take information from Foursquare, a service that makes it trendy to reveal your location as often as possible. Who needs “Big Brother is Watching You” when you can say “Hey there, are you going to put all your private information online like the COOL KIDS do, or are you a LOSER?”

This is not the first time someone has written an online snooping program that uses publicly-accessible information. Previous examples include “Please Rob Me”, to inform you, me, and any local burglars which houses are empty, and sex-pest bonanza “Girls Around Me”, showing you the location and physical appearance of females nearby.[1] I should point out that these programs were both written to prove a point – albeit in a highly irresponsible way – but that’s little consolation for anyone affected by this. The Inner Party must be kicking themselves they never thought of this.

Now, as someone with no Facebook, Twitter or Foursquare account, it would be easy for me to scoff and tell everyone affected that they brought it on themselves. But the reality, I think, isn’t quite so simple. This is an issue that I think can only be addressed with some fundamental, far-reaching questions about social media.

The problem is that, for many people, social media is now effectively compulsory. I have lost count of the number of people who say they’ll Facebook me, as if this is the only way you communicate with people nowadays. (I mean, haven’t these people heard of e-mail?) I personally think that friends who won’t stay in contact if you’re not on Facebook aren’t worth having as friends, but I have a choice of friends who aren’t so obtuse. Other people don’t. This is especially a problem amongst teenagers, where invitations to parties and the like are now exclusively given through Facebook – and habits made in teenage years can persist for a long time. And that’s just individuals. If you’re a business, or you’re self-employed, woe betide you if you’re not signed up to Facebook, Twitter, LinkedIn and mysociallifesbetterthanyourssothere.com.

Once you’re signed up, social media sites have a very poor record for privacy. Oh, they’ve got an excellent record in producing privacy policies – it’s just that the typical privacy policy roughly says you don’t have any. The reason I left Facebook (apart from endlessly getting contacted by people I was quite happy to have lost contact with) is that I got sick of all the times the site pestered me to add more and more personal information about myself. Facebook’s claim to privacy lost any credibility when they started sharing information with friends’ friends without asking you. Bear in mind that at least one of your Facebook friends is probably trying to break the world record for the most Facebook “friends” they don’t even know; so this makes Facebook about as private as announcing your next relationship breakdown with a skywriting plane. I know there are all sorts of opt-outs available in social media, but the combination of apathy and confusing configuration settings renders them largely ineffective. As for safeguards against combining information from different social networking sites to form a highly intrusive profile of you – forget it.

Normally I would argue that privately-owned companies should be able to do what they like. But the very nature of social networks makes sites such as Facebook and Foursquare virtual monopolies. And as private monopolies, they have a lot of power but very little responsibility. Foursquare cannot credibly blame third-party apps for using public information it has collected, nor can Facebook credibly blame its users for handing over private data it encouraged them to reveal in the first place. We need a serious debate about where social media stands in an increasingly lawless, privacy-disregarding internet. For what it’s worth, I think social media should, at the very least, operate information sharing on a strict opt-in basis. And if any users do wish to share their information, the sites should make it absolutely clear what this means and what the risks are. I don’t know exactly how this should be done, but this push to make users share more and more private information online isn’t doing any good.

If the big social media sites won’t budge, the only other hope is a culture change from the users. Strange as this may seem to some people, until a few years ago the world functioned perfectly well without Facebook. Social media itself is undoubtedly here to stay, but do we really have to keep the whole world informed of every aspect of our wildly trumped-up social lives? Not all techno-crazes stick around – few people today want the latest Jamster ringtone (thank God). It would, I think, be better if this fashion for sharing all your information online became a passing fad – maybe with a return to old-fashioned offline boasting. If this sounds too difficult, just think what we could achieve. When the Establishment creates the Ministry of Online Privacy, we’ll know they’re rattled.

[1] In Foursquare’s defence, it’s only fair to point out they did block access to Girls Around Me as soon as they found out about it. However, all this really proves is that next time you want to use Foursquare for snooping or stalking, you just make sure they don’t know what you’re up to.

27 February 2013, 12:18 pm

All hail the Ocelot

Linux and open source software isn’t for everyone. But it’s a good way to learn how software is developed and tested.


As well as preying on rodents and resting in trees, ocelots are surprisingly skilled in optimising recently-overhauled desktop environments.
(Photo: Danleo, Wikimedia Commons)

Yesterday (October 13th) was an exciting day for many reasons. It marked the first anniversary of the completion of the rescue of the 33 Chilean miners. Classic 80s movie fans saw the return of Ghostbusters to the big screen. It was also the day to celebrate 65 years since the adoption of the constitution of the French Fourth Republic. All of these fascinating events, however, paled into insignificance against the most eagerly anticipated event of all: the release of Ubuntu 11.10, codenamed Oneiric Ocelot.

For those of you who don’t know what’s so Oneiric about an Ocelot, I should explain what all the excitement is about. Ubuntu is a Linux-based operating system, which works as an alternative to Windows, and this is its latest six-monthly upgrade. (If you want to know why you’d choose to name an operating system after a South American wildcat, this page should explain.) Like most Linux distributions, it’s free – and not just free to use (like Adobe Flash Player or Microsoft Word Viewer are). It’s free for anyone to copy, modify and redistribute, as long as any derivative you produce is also free to modify. Only a small number of Linux users actually modify software this way, but the fact that this is possible has a huge influence on how Linux is developed. Windows fans argue Linux is just a mish-mash of cobbled-together software written in backrooms, whilst Linux fans argue that the open, collaborative way Linux is developed is actually better than Microsoft’s work behind closed doors. Anyway, the arguments could go on for years, but this is a blog about software testing – anyone who wants to continue on this subject can read why Windows is better than Linux or why Linux is better than Windows.



From a software tester’s point of view, however, there is a big advantage to Linux: you can learn a lot about software development and testing. All of Ubuntu’s Alpha and Beta releases are publicly available to download and try out, and it’s a valuable lesson in just how much work is needed to test and stabilise software. The Alpha 1 release typically comes out after just 1½ months of development with all the major changes already in place – but expect the flashy new features to crash the system the moment you sneeze at it.[1] It is only over the next 3 months that later alpha releases transform the bug-ridden mess into something reasonably stable. The Alpha releases are also the stage where features get pulled – they can be features that looked good on paper but don’t work in practice, or simply features that got a hostile reception from early adopters.

When you reach the Beta releases, you’ll probably come across a system that looks all polished and ready to go. It isn’t. It may load up fine, and all the programs you fancy using may fire up and appear to work, but it’s only when you try using the programs that you run into annoying bugs that haven’t been picked up yet – bugs that can still add up and render the system unfit for purpose. This sort of thing, I suspect, is a trap many projects fall into: impatient testers or managers try out beta-quality software, watch it run smoothly at face value, go ahead and deploy it, and learn its shortcomings the hard way.

Then there’s the open bug tracking system. Anyone who finds a bug – whether in an Alpha, Beta or stable release – can report it. But if you want your bug report to be taken seriously, you have to do it properly. Simply writing “Firefox didn’t work” is useless. If, however, you state exactly what Firefox is doing wrong, what you were doing when this happened, whether this error is reproducible or just intermittent, what version you were using, and anything else that might help developers pin down the bug, you will be more likely to get the bug fixed. If the bug you’ve found has already been reported, you can look at the bug report to see how it was handled.
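
To make that concrete, here is the shape of a report that tends to get taken seriously. This is a hypothetical example – the version numbers, symptoms and attached log are invented for illustration – but the ingredients are the ones listed above:

```
Summary:   Firefox closes without warning when opening a second tab
Version:   Firefox 7.0.1 on Ubuntu 11.10 Beta 2 (64-bit)
Steps to reproduce:
  1. Start Firefox with a fresh profile.
  2. Load any page, then press Ctrl+T to open a new tab.
Expected:  A new blank tab opens.
Actual:    Firefox exits with no error message.
Frequency: Reproducible every time; also seen on a second machine.
Extra:     Crash log attached; the bug does not occur on Beta 1.
```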

Following open-source projects doesn’t teach you everything. The kind of testing you can observe – the unstructured error reporting from users as and when they come across bugs – is useful, but the kind of work done by paid software testers, in both open- and closed-source projects, is systematic testing designed to track down bugs and make the system fit for purpose. There are many other concepts in software testing, such as testing models, reviews, and static/dynamic analysis, that generally go unnoticed by users of alpha and beta releases. But it’s still a good way to try out the world of software testing, and if you find it interesting, perhaps you can make it your full-time job.
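
If you are curious what “systematic” looks like in practice, the simplest building block is a scripted, repeatable check that probes the unhappy paths as deliberately as the happy ones. Here is a minimal sketch using Python’s built-in unittest module – the function under test is invented purely for illustration:

```python
# Minimal sketch of a scripted, repeatable test. The parse_version
# function is a made-up example, not part of any real project.
import unittest

def parse_version(text):
    """Parse a dotted version string like '11.10' into a tuple of ints."""
    return tuple(int(part) for part in text.strip().split("."))

class ParseVersionTests(unittest.TestCase):
    def test_typical_release(self):
        self.assertEqual(parse_version("11.10"), (11, 10))

    def test_surrounding_whitespace_is_tolerated(self):
        self.assertEqual(parse_version(" 12.04 \n"), (12, 4))

    def test_garbage_is_rejected(self):
        # Systematic testing means deliberately probing the unhappy paths.
        with self.assertRaises(ValueError):
            parse_version("oneiric")

if __name__ == "__main__":
    unittest.main()
```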

Anyway, Precise Pangolin Alpha 1 comes out on 12th December 2011. I can hardly wait.

[1] Oh, and if you are thinking of trying out an alpha release, you probably want to do it on a spare computer. In theory, an alpha release of an operating system can sit quite happily alongside a partition of Windows or Linux that you use for work, but as test releases by their very nature are liable to do disastrous things you didn’t expect, you probably want to keep it out of harm’s way.

27 February 2013, 12:17 pm

Rest in peace, Steve Jobs



The first thing discussed at work today was, of course, the death of Steve Jobs, aged only 56. The news was not entirely unexpected - his retirement from Apple earlier this year made many people suspect this day was coming - but few people expected it to happen so soon.

When you're an advocate of Microsoft/Apple/Linux, it's tempting to do nothing but pick faults with the two competitors. I have had a go at Apple for their patent lawsuits against Android smartphones. But that should not distract us from what Apple has achieved under his leadership. Technology is not just about creating something new - anyone, for instance, could have created a miniaturised computer capable of playing MP3 files - it's also about recognising what people want. There is no shortage of inventions out there that failed to take off simply because people saw no point in switching from what they were using before. But Steve Jobs had an extraordinary talent for identifying what would grab people's interest, and for selling these ideas to the public.


In the years when the world subscribed to all things Microsoft, Apple kept a niche with the iMac: tightly integrated software and hardware, praised for its reliability, and still the number one choice for artists and graphic designers. With the iPod, iPhone and iPad, Apple pioneered products that were previously unheard of by the everyday public. Google's Android has since taken a significant chunk of the smartphone and tablet market, and Microsoft's Windows Phone 7 can't be dismissed just yet, but no-one can take from Jobs the title of the man who introduced these products in the first place.

We will never know what future innovations Jobs might have brought to Apple, but one thing is certain: his death yesterday is a huge loss to both Apple and the world.

27 February 2013, 12:17 pm

The Ghost of Vistas Past

Damage to consumer confidence can haunt you for a very long time. Windows Vista is the classic case.


In case you’ve been locked up in a wardrobe for the last two months, Windows 8 is on the way. At the launch a few weeks ago, Microsoft demonstrated how the next version of its operating system is designed to work on tablets. The fact that Microsoft is focusing on tablets is interesting, because it shows how high the stakes are. For over a decade, bar a few niche markets (Macs for high-end users and graphic designers, Linux for the tech-savvy), Microsoft has been the undisputed king of Desktop PCs, and none of Microsoft’s competitors are anywhere near taking their crown.

The problem is: they don’t have to. The computing market is moving on. Many things that used to be done on a Windows XP machine can now be done on a smartphone or a tablet, and consequently, many Desktop PC users are switching to these devices. And so far, both tablets and smartphones are dominated by Apple and Android. The nightmare scenario is that Android makes the leap from tablets to the desktop and undercuts Microsoft’s safest market. Little wonder Microsoft wants Windows 8 established on touchscreen computers so badly.



It should not have been this way. Tablet devices such as the iPad are really just laptops with touchscreens instead of keyboards, so Windows ought to have had an easy transition from one to the other. Instead, tablets are being treated as oversized mobile phones, and Apple and Android, both miles ahead of poor old Windows Phone 7 in market share, got there first. The thing is, Windows Phone 7 actually got a fairly good reception on its launch, and Windows 7 on the desktop didn’t do too badly either, so what is going wrong in the touchscreen market?

The answer, I suspect, is lack of consumer confidence. It came to a head with Windows Vista, and Microsoft never truly recovered from it. Now, it would be easy to say that it’s all Microsoft’s fault for taking customers for granted and making no effort to get their stuff working properly. I don’t think it’s that simple. Microsoft is not run by IT-illiterate pen-pushers, and they are quite capable of producing popular and reliable devices – the success of the Xbox and Kinect speaks for itself. So where did they go wrong with Vista? Without an open test and development plan, it is hard to know what they were thinking, but my theory is that it wasn’t that they didn’t see the need to test; they just underestimated the work that needed to be done. For what it’s worth, I think the key mistakes were:
  • Lack of attention to hardware compatibility. In the days of Windows XP, Microsoft could get away with expecting manufacturers to get their hardware compatible with Windows (on pain of going out of business). When Windows Vista came along, suddenly Microsoft had to do it the other way round, and evidently didn’t realise the amount of work involved.
  • Too much trust in the upgrade facility from Windows XP. Upgrading an operating system has never been that reliable, and whilst there was no harm in providing the facility for those people who understood and accepted the risks, it was a big mismanagement of expectations to present this to Joe Public as the quick and easy way of moving from XP to Vista.
  • Underestimating the implications of a five-year gap between releases. Between the release of XP in 2001 and the release of Vista in 2006, we saw the widespread adoption of home broadband, wi-fi, CD writing, digital cameras, MP3 music, online financial transactions and – unfortunately – a whole load of security threats abusing these technologies. All of these were accommodated in Windows XP with one sticking plaster after another. Incorporating all of these into a consolidated modern system was inevitably going to take a long time to get right.
  • Over-dependence on high-spec systems. Microsoft products had been criticised before for getting more bloated as computers got faster, but Windows Vista took this to a whole new level, with even new computers pre-installed with it struggling to meet the system demands. Some performance testing on lower-spec machines should have set alarm bells ringing much sooner.
  • Lack of caution with digital rights management. It’s not fair to blame DRM on Microsoft completely, because they were leaned on by the big film companies (who, let’s be fair, had their sources of revenue to worry about). But when you introduce a feature that’s designed to restrict what users can do rather than enhance it, the last thing you want is to end up also stopping users doing perfectly legitimate things. DRM was always going to be controversial, but giving the impression that faults elsewhere in the system were a price worth paying was really asking for trouble.
I could be wrong; for all I know, it was a different set of mistakes. But there is little doubt about the result: Windows Vista needlessly reinforced Microsoft’s reputation as the unreliable software we all love to hate. Windows stayed king of the desktop PC for one reason and one reason only: most people considered switching to the alternatives too much work or too expensive. And in the smartphone and tablet market, where Microsoft doesn’t dominate, that’s not good enough. The moral of the story is that even after you correct your mistakes (as Microsoft largely did with Windows 7), the damage to your reputation can haunt you for a very long time.

Microsoft will survive somehow. We saw with the Xbox that Microsoft can still compete, and we saw with the Kinect that Microsoft can still innovate. Microsoft has been doing better in the server market than it used to. Even if the apocalyptic predictions of the demise of the Desktop PC come true, Microsoft has deep enough pockets to hold out until it finds a new role in the IT market. But when we are even contemplating this of a company that once wowed the world with Windows 95, something has gone seriously wrong, and the rest of the world needs to learn lessons from this.

27 February 2013, 12:16 pm

How to win attention and annoy people

Search Engine Optimisation is big business in IT. It’s just a pity it’s become so intrusive.

It used to be this simple
(Photo from SMBSEO.com)

Can I have your attention please? I apologise in advance, but I am about to abuse my position as a software tester. No, I’m not going to sell confidential client information to Russian spies or anything like that, but I am nonetheless going to misuse this blog to further my personal interests outside of my job. All right. Are you ready? Let’s do a countdown and get this over with. 5 … 4 … 3 … 2 … 1 …


Actually, you needn’t click there if you don’t want to. I’m not too fussed either way. For those who didn’t bother clicking, that was a link to my web site on play writing, which is what I do in my alternate life. I don’t care too much whether you view it – seriously, there can’t be that many people with interests in both software testing and theatre in the vicinity of Durham – but that’s not the purpose of the link. The purpose of the link is for Google and other search engines to know it’s there. Because the more links Google finds to your page, the higher it gets up the page rankings.


It used to be so much simpler. In the olden days, if you wanted some builders in Woking, you looked under “Builders” in the Yellow Pages. Builders and other businesses paid for advertising space, with more money for a bigger advert, and unless you traded as Aaron A. Aardvark or Zzacharias Z. Zzyzz, there wasn’t any real way of gaming the system. This all changed when the internet came along. The early search engines gave the top spot to whichever page put the search term in its text and keywords the most often. This was a reasonable idea – after all, if you’re looking for a web page about Yorkshire, you probably don’t want a food menu from a pub in Dorset that happens to include Yorkshire puddings in its Sunday roast – but it inevitably resulted in every builder in Woking entering keywords of BUILDERS BUILDERS BUILDERS BUILDERS WOKING WOKING WOKING TRUSTWORTHY RELIABLE QUALITY etc. etc.
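
To see why that scheme was so easy to game, here is a toy sketch of the naive approach – count how often the query term appears and sort by it. The pages are invented for illustration:

```python
# Toy model of early keyword-count ranking -- the pages are fictional.
def naive_rank(pages, term):
    """Return page names sorted by raw occurrences of the search term."""
    term = term.lower()
    return sorted(pages, key=lambda name: pages[name].lower().count(term),
                  reverse=True)

pages = {
    "honest-builder.example": "Smith & Sons, builders in Woking since 1970.",
    "keyword-stuffed.example": "BUILDERS BUILDERS BUILDERS WOKING WOKING WOKING",
}
print(naive_rank(pages, "builders"))  # the stuffed page comes out on top
```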

So when a couple of researchers at Stanford University came up with the idea of “PageRank”, which instead considered how many websites link to yours (and how prominent the linking pages are), Google became the overnight success we all know about. But anyone hoping for an end to the search engine wars will be disappointed. I confess I find chasing pageviews on this blog and my own site addictive, but I have better things to do than put links on as many sites as possible. If, however, your business depends on web visibility, there’s a lot more at stake. And this is why Search Engine Optimisation (SEO) is such big business.
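
For the curious, the core idea can be sketched in a few lines. This is not Google’s production algorithm – just the standard power-iteration illustration of PageRank, run over a made-up four-page link graph:

```python
# Illustrative PageRank by power iteration -- the link graph is invented.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            # A page shares its current rank among the pages it links to;
            # a dead end shares its rank with everyone.
            targets = outgoing or pages
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# Page D is linked to by every other page, so it earns the highest rank.
links = {"A": ["B", "D"], "B": ["D"], "C": ["D"], "D": ["A"]}
print(pagerank(links))
```

The damping factor models a reader occasionally jumping to a random page, which is what stops all the rank pooling inside closed loops of links.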

Now, it wouldn’t be fair to portray SEO companies as the bogeymen. Plenty of SEO techniques, such as placing appropriate links on other websites, are considered perfectly legitimate. I have absolutely no problem with my Google searches being made relevant to what I’m looking for. If I’m looking for builders in Woking, I’m quite happy for SEO companies to ensure that no-one I might be interested in gets overlooked. The problem is that once market forces come into play, a “relevant web experience” often means bombarding users with whatever gets money out of them. As soon as Google started judging importance by links from other sites, attention turned to those other sites – and the lengths some sites went to were astonishing. Blogs and open wikis used to get plagued with irrelevant links (with reasons for the link frequently no better than “check out this cool site”). Many platforms, including WordPress and Wikipedia, now mark such links with the “nofollow” attribute (rel="nofollow") to stop this practice paying off, but whether this actually deters link spammers is anyone’s guess.

There is big business in linkfarms: sites that serve no function other than trying to push up another page’s Google ranking. Sites that get caught by Google are, in effect, disqualified and put to the bottom of the list. One high-profile casualty in 2006 was BMW. Three years earlier, a company called SearchKing was rumbled and penalised for blatantly gaming the system, and promptly responded by suing Google. They got nowhere, but it says something about how much some people regard buying their Google rank as an entitlement. Lately, Google appears to have gone to war with WebPosition Gold for sending automated queries to probe Google’s rankings (and, one might suspect, find the loopholes). But the question remains: how much of this practice goes undetected?

Then there’s the practice of drawing people to your site who were looking for something else. I got a surprising number of visitors to my blog entry about software patents who were looking for pictures of the Montgolfier brothers. That was purely by accident, but there is a growing suspicion this sort of thing is being exploited on purpose. BMW was found to have redirected users to a site with far fewer of the keywords the user searched for. It was claimed by Private Eye that journalists are encouraged to put popular search phrases into articles in order to increase web traffic, and therefore advertising revenue. There’s no knowing where this will end.

Is there a solution to this? I honestly don’t know. I’ve previously argued you could solve the software patent problem by scrapping patents, but you can’t exactly solve this problem by scrapping search engines. It’s all very well telling Google to try harder, but they are already in a fight to stay one step ahead of the link spammers. I’m almost tempted to suggest a return to an internet version of the Yellow Pages, where people looking for a service can go to a page where prominence is once more governed by how much you pay for advertising space – but as paid adverts are an even bigger pain in the backside, I can’t see the public buying into this idea.

Don’t get me wrong: no-one can deny the benefits the internet has brought. In the days of printed media, people like me could not have hoped to get the word out about what they’re doing. And blaming Google for the search engine war that followed is like blaming YouTube for Justin Bieber. But I can’t help thinking that in the days of the Yellow Pages, at least life was a lot simpler.
