While doing a little research this morning, I stumbled onto a paid advertisement within Google for Ask.com, informing me that I could book BreakFree hotels & resorts from within the Australian localised Ask.com portal.
Being the curious kind of person, I followed their advertisement and was quite shocked by how deceptive the ad, and the page it took me to, turned out to be.
Instead of Ask.com providing some sort of useful service inside their portal, they provided 10 Google advertising results front and center, displayed as though they were organic results, followed by the actual organic results (click the image for an expanded screenshot of their handiwork).
I don’t necessarily have a problem with them doing paid advertising within Google for services that they offer (though in this case, they don’t have a service relating to my search terms, which was very deceptive). However, I do have a beef with the way they frame, or rather fail to frame, the paid results from Google within the Ask.com search results. If they had placed the same 10 results in the right hand side gutter, or boxed them with a different background colour – then at least the user would have a chance of knowing the difference.
I wonder whether that sort of behaviour falls within the Google terms of service? It actually reminds me of when Microsoft were advertising on Google for MSN Messenger and taking users into more search results within Live Search.
Virtually every webmaster has heard of Google Webmaster Tools and uses it regularly to check on the health of their site; unfortunately, very few know of Live Search Webmaster Center, which complements Microsoft Live Search.
Recently I wrote about the significant improvements that Live Search Webmaster Center has gone through, which has really boosted the product. To put the new enhancements through their paces, it seemed like a good idea to compare what it was displaying versus what Google Webmaster Tools was showing.
Live Search Webmaster Center showed that I didn’t have anything wrong with my robots.txt file, nor was I suffering from long and complex dynamic URLs – however, I did have a handful of 404 errors throughout the site. Live Search Webmaster Center had picked up that I had linked to another site without the http:// in the href attribute, like:
- <a href="www.domain.com/important/article/">important article</a>
which, when clicked, delivered a 404 error on my site because the browser resolved the address relative to the current page.
To my surprise, when I explored that same information within Google Webmaster Tools – they had not picked up that I had linked that article incorrectly.
Moral of the story: don’t put all your eggs in one basket. While Google either hadn’t picked it up or had simply compensated for my mistake – simple mistakes like that may have an adverse effect on less capable search engines.
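This class of mistake is also easy to catch yourself, before any crawler does. Below is a minimal sketch in Python that scans markup for href values lacking a scheme; the checker and its heuristic are my own illustration, not a feature of either search engine’s tooling:

```python
# Sketch: flag <a href> values that are missing a scheme. Browsers resolve
# these relative to the current page, which commonly produces 404s.
from html.parser import HTMLParser
from urllib.parse import urlparse

class BareHrefChecker(HTMLParser):
    """Collects href values that look like external domains but lack a scheme."""
    def __init__(self):
        super().__init__()
        self.suspect = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value:
                parsed = urlparse(value)
                # No scheme, and the value starts with something domain-like
                # (e.g. "www.domain.com/...") -> almost certainly a mistake.
                if not parsed.scheme and value.startswith("www."):
                    self.suspect.append(value)

checker = BareHrefChecker()
checker.feed('<a href="www.domain.com/important/article/">important article</a>')
print(checker.suspect)  # ['www.domain.com/important/article/']
```

Running something like this over your templates before publishing would have saved me the handful of 404s described above.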
This week Google Analytics received a small upgrade – specifically related to the login process.
Until now, no matter how often you used Google Analytics, you were forced to log in every time you returned to the site. It was frustrating enough that heavy users developed the habit of leaving a window open with Google Analytics logged in, just for the convenience.
With the latest update, the Google Analytics team are saying that you no longer need to log in and that the process has been streamlined. I’d argue that only part of that statement is true: you do not need to authenticate – however, it isn’t streamlined.
With the majority of other Google services, once you’ve authenticated and subsequently return, the service reads your Google Account information and you immediately have access. For some reason, the Google Analytics team have chosen against the consistent authentication process common amongst many other Google services, and the user is forced to click a button to enter.
The process won’t be streamlined until it functions like Google Mail, Google Reader and so on. I welcome the improvement – at least I no longer need to type in my account information all the time – however since they already know that I’m authenticated, I shouldn’t need to click again to re-enter the application.
In the last few days, the Live Search Webmaster blog has posted about two significant improvements to the webmaster center: how Live Search crawls your site, and more detailed backlink information.
Live Search Webmaster Center now supports the following four items, which are a great help in identifying problems with your site and how Live Search is spidering your content:
- File not found (404) errors, a straightforward, date-stamped account of the HTTP “404 File Not Found” errors that Live Search encountered when crawling the site. Conveniently, this includes broken links within your own site and to sites that you are linking to.
- Pages Blocked by Robots Exclusion Protocol (REP), reported when Live Search has been prevented from indexing or displaying a cached copy of the page because of a policy in your robots exclusion protocol (REP).
- Long Dynamic URLs, reported when Live Search encounters a URL with an exceptionally long query string. These URLs have the potential to create an infinite loop for search engines due to the number of combinations of their parameters, and are often not crawled. I haven’t come across one of these yet, and so far I haven’t seen any documentation of what ‘exceptionally long’ means, so clarification on that point would be handy.
- Unsupported Content-Types, reported when a page either specifies a content-type that is not supported by Live Search, or simply doesn’t specify any content type. Examples of supported content-types are: text/html, text/xml, and application/PowerPoint.
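To illustrate that last item, here is a minimal sketch of the kind of check involved, using the three content types listed above; the helper function is hypothetical, not part of any Live Search tooling:

```python
# Sketch: check whether a page's Content-Type header names a supported type.
# The supported set below is the example list from the announcement.
SUPPORTED = {"text/html", "text/xml", "application/PowerPoint"}

def content_type_supported(header_value):
    """Return True if the Content-Type header names a supported media type.

    Headers often carry parameters, e.g. 'text/html; charset=utf-8',
    so only the media type before the ';' is compared, case-insensitively.
    """
    if not header_value:
        return False  # no content type specified at all
    media_type = header_value.split(";")[0].strip().lower()
    return media_type in {t.lower() for t in SUPPORTED}

print(content_type_supported("text/html; charset=utf-8"))  # True
print(content_type_supported("application/octet-stream"))  # False
print(content_type_supported(None))                        # False
```

A page failing both branches – no header at all, or an unrecognised media type – is what would show up in the Unsupported Content-Types report.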
In 2007, Microsoft removed the ability for users to drill into backlink data within Live Search. It took a long time, however that functionality has now been replaced within Live Search Webmaster Center and is looking quite promising.
Common functionality shared between the crawl information and backlink data is that Live Search Webmaster Center allows you to download the information in CSV format. Possibly the best feature for a large, complex site though, is that each of the above options can be filtered (search style) further by entering a subdomain and/or directory to restrict the results. The backlink interface additionally supports a top level domain in the search box, allowing you to isolate only backlinks originating from an Australian site by entering .au.
The interface doesn’t support paging of results, in case you want to step through a few pages without exporting to CSV. If you do want to download more information, there isn’t an option to export everything in one hit – you can only retrieve 1000 lines of data. I can appreciate that they might not want to provide an ‘all’ option, or that they want to limit how much can be fetched at once; however, there is no way to set 1000 items per page, download them, and then move to the next page and download those. The other issue with the 1000-line limit is that there is no information on how those 1000 lines are selected – the backlink section uses the language ‘Download up to 1000 results’, without any indication of how the 1000 are chosen.
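Once you have a CSV export, the same kind of top-level-domain filtering can be done offline anyway. A rough sketch, assuming a simple one-column export layout with a url field for each linking page (the real export format may well differ):

```python
# Sketch: group a downloaded backlinks export by top-level domain, roughly
# mirroring the '.au' filter described above. The CSV layout is an assumption.
import csv
import io
from collections import Counter
from urllib.parse import urlparse

sample_export = io.StringIO(
    "url\n"
    "http://example.com.au/post\n"
    "http://blog.example.org/entry\n"
    "http://another.com.au/page\n"
)

def count_by_tld(csv_file):
    """Count linking pages by the last label of their hostname."""
    counts = Counter()
    for row in csv.DictReader(csv_file):
        host = urlparse(row["url"]).hostname or ""
        tld = "." + host.rsplit(".", 1)[-1] if "." in host else host
        counts[tld] += 1
    return counts

print(count_by_tld(sample_export))  # Counter({'.au': 2, '.org': 1})
```

It doesn’t solve the 1000-line ceiling, but it does mean the built-in filters aren’t the only way to slice whatever you can download.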
While there is still room for improvement (and really, when isn’t there?), I’m personally encouraged by the changes that Microsoft are making to Live Search Webmaster Center. The sooner services from Microsoft start to catch up to those offered by the leaders – the sooner more businesses and webmasters will invest time into the Live Search product.
Approximately six months ago, I mentioned that I was going to conduct a small test regarding the impact that optimising the HTML <title> element has from a search engine optimisation standpoint.
In December when I wrote that, #if debug was a very new site – in fact, it had been online for exactly one month. It’ll come as no surprise that no one knew about the site; even to this day, only a limited number of people know about it. Fortunately though, I do have evidence to suggest that more know about it now than they did in December!
In the announcement, I had said that a 10% increase in traffic would be considered a success. Given that the site was receiving approximately 60 visits per week at that time – optimising the <title> element would need to increase that to around 66 visits to be considered a success.
In the image above, you can see the effect of optimising the <title> element in the HTML. The change was made at the marker point directly above the 25 in “Nov 25, 2007 – Dec 1, 2007”. I’m not sure what caused the dip in traffic immediately after the change, however once it recovered – the increased traffic has been maintained or improved upon. The marker point second from the right delivered a whopping 67 visits for the week, and as such I’m going to claim this a victory (even if it is a very, very small one!).
In the following few months, this tiny site has grown from zero visits, steadily increasing month on month to a lofty figure a little over 400 visits per month! I realise that isn’t a lot of traffic by anyone’s measurement; however, for a site that has had very little effort put in and next to no attention directed its way – it isn’t half bad.
Bitbucket is the latest project by Jesper Nøhr. If the name looks familiar, it’s because I wrote about Jesper in March when he used Django and Python as a rapid development environment for an indie advertising product named Indiego Connection.
This time around, Jesper has shifted gears to provide hosting for a popular distributed version control system named Mercurial. I haven’t started drinking the distributed version control kool-aid just yet, however it has been gaining a lot of attention lately via another open source product named Git, developed by Linus Torvalds – the creator of the Linux kernel.
The Mercurial hosting provided by Bitbucket comes in a few different flavours, one of which is free and allows up to 150 MB of storage. I really like the fact that they are not attempting to offer a completely free service; if they were, I suspect that it’d be under enormous pressure. The cost of using Bitbucket to host your Mercurial repositories is very reasonable, starting from $5/month and stepping up to $100/month, which includes 25 GB of storage.
Bitbucket provides a very convenient interface for interacting with the Mercurial repositories. As with most web interfaces to source control management packages, you can browse through different repositories, see all of the changes flowing through them and compare them if you like. A couple of features I like that simpler products don’t support: you can ‘follow’ a repository, create queues for patches related to a repository, download the repository at a given point in time in zip, gz or bz2 formats, and it provides an easy-to-understand visual linking between changesets.
If you are looking for Mercurial hosting, I would definitely investigate whether Bitbucket is a suitable candidate to store whatever you need versioned. The service certainly looks the goods and from what I’m reading online, it is getting really solid reviews already.
Today the automatic update kicked in for Java on my notebook, which it does quite regularly. I love the fact that different products implement a relatively unobtrusive upgrade to their software to keep it up to date, I know if they didn’t – all of my non-critical software would quietly go out of date.
During this particular update, I happened to notice (not sure if it was there before) – however Sun are now bundling (optionally of course), Google Toolbar with the Java installer. I’m all for providing the automatic update, however I don’t believe they should be bundling additional software, optional or otherwise with an automatic update.
I have no issue if you have just installed Java for the first time and have chosen to install the additional software; however, adding it into an update and having it enabled by default is just a little too slimy for my liking.
Google have simplified the account management interface for Google Analytics. Previously when adding a user into the system, you needed to provide:
- an email address of a valid Google Account
- first name
- surname
- access level (administrator/reporting)
It appears that you no longer need to provide the first name and surname information. Interestingly though, rather than being marked as optional, the fields have been completely removed from the interface.
To my knowledge, the first name and surname information isn’t visible anywhere within Google Analytics (please correct me if I’ve just missed it). If it isn’t displayed or is in limited use, it’s possible Google realised that they were increasing the barrier of entry for no tangible benefit or that they were duplicating information already available within a Google Account.
I’m an advocate for sensible usability on web sites and fully support the usability guidelines that recommend descriptive link text. There are measurable improvements to a user’s browsing experience when a webmaster makes a conscious decision to use useful link text, instead of an uninformative ‘click here’.
One particular aspect of useful link text that I try to abide by at all times, is that the link text should be descriptive and should reflect the resource that it is linking to. As an example, if you’re linking to a web page about the Porsche 911 GT3 RS, then a useful link might be Porsche 911 GT3 RS.
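This guideline is also easy to audit mechanically. A small sketch that flags generic link text – the phrase list is my own choosing, not taken from any published guideline:

```python
# Sketch: record the visible text of each <a> element and flag phrases
# that say nothing about the destination.
from html.parser import HTMLParser

GENERIC = {"click here", "here", "read more", "this", "link"}

class LinkTextAuditor(HTMLParser):
    """Flags links whose visible text is a generic phrase."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.text = ""
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.text = ""

    def handle_data(self, data):
        if self.in_link:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False
            if self.text.strip().lower() in GENERIC:
                self.flagged.append(self.text.strip())

auditor = LinkTextAuditor()
auditor.feed('<a href="/x">click here</a> <a href="/y">Porsche 911 GT3 RS</a>')
print(auditor.flagged)  # ['click here']
```

Of course, a checker like this can only catch text that says nothing at all; it can’t catch text that says the wrong thing, which is exactly the problem with the example that follows.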
A popular technology site, TechCrunch, has various web real estates that it promotes at every opportunity – however, I think of late they are going a little too far with their frivolous, slap-happy linking. Recently, the Governor of California, Arnold Schwarzenegger, announced that California had secured the manufacturing plant from Tesla, bringing it back from New Mexico.
In the article on TechCrunch, they provide a number of links (link text and URI below):
- Tesla Motors, http://www.teslamotors.com/
- the Roadster model, http://www.crunchgear.com/tag/tesla/
- “Come with me if you want to live”, http://www.youtube.com/watch?v=hHV6OzHjWV8
- “Do it, do it now”, http://www.youtube.com/watch?v=u6ALySsPXt0
and my beef is with the second item in the above list. When viewing that article, I expected that link to take me to the Roadster vehicle home page within the Tesla Motors site; instead it took me off to a completely useless page about Tesla Motors (the company) within their business information site, CrunchGear.
I’m all for TechCrunch promoting their other web assets, however I’m confident that their readers would enjoy their site that much more if they’d find a more appropriate manner in which to promote CrunchGear instead of deceptively linking into that site.
Google News is a great service; probably its single best feature is that it aggregates news stories from numerous sources into one place and then condenses them, so as a user you don’t need to be bothered by, or read, the same story more than once. As with everything else Google related, it’s driven by clever algorithms that decide what to collapse/consolidate, which snippets to show and which images to associate with a given topic or news item.
When viewing the Australian Google News page today, I stumbled across something that I thought was quite funny. In a moment of algorithms behaving badly, they had managed to associate an image of Pamela Anderson with a collapsed set of news items related to regional council mergers in the Northern Territory. Clicking on the Pamela Anderson photo took you to the appropriate story, so that part of the system was behaving correctly – it was just her association with Northern Territory council mergers that wasn’t!