April 24th, 2009
This blog has moved to a new address. I was never completely happy with the URL of this one and the new URL reflects the focus of the blog going forward and should also be a bit easier to remember. Please come and join me over on the new site.
Introductory post: http://www.thesocialtelco.com/2009/04/24/the-social-telco/
Don’t forget to switch your feed readers over too!
December 10th, 2008
I’ve come across a variety of statistics recently from various surveys about communication preferences, and was tempted each time to do a post. Instead, I’m doing one post on all of them, which should allow for some bigger-picture thinking. In essence, the conclusion you naturally come to when reading these articles is that landline telcos are in for a nasty period of rapid decline in their core business thanks to the communication preferences of the rising generation. But there are things they can do to manage and slow this decline and remain relevant.
The first couple of articles concern the trend for greater use of mobile devices and the decline in the number of landlines:
The second set concerns the communication preferences of younger people (often described as Millennials):
Both sets of articles, though, are really about the changing communication preferences of the population as a whole, and the impact of that younger group in particular. Those currently aged 15-25 are growing up with a radically different set of communication behaviors and preferences from those embraced by even 25-35 year olds, let alone the older generations. And this will have a massive impact on the landline telcos around the world, which don’t really feature in this picture at all. As the rising generation makes up an ever greater proportion of the total population this impact will only increase.
Mobile substitution happening from the bottom up
First, the increased use of mobile devices and abandonment of landlines. I remember talking to Gavin Patterson, then head of the consumer retail bit of BT, about six or seven years ago, about the challenge of driving growth in his business, and he told me his worst nightmare was a generation of kids growing up never having a relationship with BT. Sadly for him, and other landline telcos around the world, the nightmare is now reality. The CDC survey both articles are based on tells us that 17.5% of households have no landline but do have wireless phones. However, the most striking statistic for me is this one:
Nearly two-thirds of all adults living only with unrelated adult roommates (63.1%) were in households with only wireless telephones. This is the highest prevalence rate among the population subgroups examined.
You’d better believe that that’s mostly college students and those recently graduated from college and still living with roommates, almost all in the 18-25 category. Here’s more detail on the age split overall:
More than one in three adults aged 25-29 years (35.7%) lived in households with only wireless telephones. Approximately 31% of adults aged 18-24 years lived in households with only wireless telephones.
Remember that a good chunk of 18-24 year olds live with their parents and thus technically have landlines in the home even if they never use them. The question is whether these people will ever return to the habits of their parents as they get older, settle down and have kids of their own. There's not much evidence yet to suggest that they will, and there's not much incentive either. It used to be that a landline from the phone company was necessary to get broadband, but now that 'naked DSL' is widely available and cable competitors offer TV/broadband packages without voice, that's no longer the case.
The next question is whether these future households will have landline connections at all - with the increasing availability of 3G and impending availability of 4G wireless options for web access and an increasing preference for web-delivered rather than broadcast/linear video content, I’d question whether these households will need a wireline connection - from a telco or a cable company - at all.
Voice isn’t even a communication option for most young people
Of course, all this assumes that voice is still one of the main modes of communication for young people, but the second set of articles suggests this isn’t the case either. The ReadWriteWeb article cites an eROI study on the communication preferences of high school and college students and includes this chart from the survey:
One caveat: the survey seems to have asked specifically about online communications, but from other surveys I've seen and from personal experience with teenagers, voice would barely make a blip on charts like this even if it were included. The other key point is that email - so newfangled when it first entered most people's lives in the mid- to late-90s - is becoming distinctly passé. Text messaging already enjoys a much higher usage rate, and the combined social networking categories and the combined IM categories in the chart above each already add up to the same share as email (26%). IM seems to be on the decline, with the exception of social networking IM, but texting and social networking are now the major components of online communication for most young people. And none of those services is provided by a telco either. Wireless telcos have the best opportunity to capture some of this spend by creating easy-to-use, low-cost wireless options for using these services on mobile devices, but landline telcos risk being entirely marginalized.
It’s grim reading, all of this, if you’re a landline telco or someone who works with them. Is there anything they can do? Yes, absolutely. They should immediately begin (if they haven’t already) building partnerships with social networks and other online providers to ensure that the necessary interfaces are in place to allow telco services to be linked in to those environments. BT’s acquisition of Ribbit is a great example of an innovative approach to tying online and landline worlds together, and Telecom Italia has also done clever things with Facebook, allowing customers to make calls from within the Facebook site, for example.
Telcos need to offer deep integration both ways between their systems and these online service providers’ systems to allow address book sharing, easy initiation of old-fashioned phone calls and other methods of telco-based communication from within websites and otherwise make the linkages between the two worlds as clear and easy as possible. Telcos have no hope of creating standalone offerings for young people that will generate any kind of real interest, but partnering with the sites where those young people already spend their time is the next best thing. Allow those companies to innovate, and offer them things they can’t easily do as an incentive to partner.
All is not lost - yet. But it’s certainly heading in that direction, and only innovative telcos willing to really rethink the way they engage with teenagers and young adults will have any chance of staving off the steep decline that seems to be on the cards.
December 10th, 2008
The Dilbert comic strip for today was a perfect trigger for me to revisit something I’ve been thinking about for a while, and that’s the impact of apathy and laggards on the adoption of new technologies in telecom (something I promised to write about here), but almost more importantly the abandonment of old technologies. Here’s the strip:
The key point that I’ve been thinking about recently is that these late adopters or laggards have a pretty dramatic effect on telecom spending, in that they considerably slow the rate of change, and especially the rate at which old services decline.
A case in point: AT&T's standalone long-distance voice service. This used to be AT&T's bread and butter, alongside its services for business customers, and it sold the service aggressively (as illustrated by New Yorker cartoonist Kim Warp in another cartoon, at left). In 2001 AT&T made just under $14 billion from these services, which made up the vast majority of its consumer revenues. Around that time, local telcos such as Verizon, SBC and BellSouth began marketing their own long-distance services, sold as part of a bundle with local service and, over time, broadband and video as well. Also, in 2003 the FTC introduced the 'Do Not Call' list, preventing companies like AT&T, MCI and Sprint (and of course many companies in other industries) from cold-calling non-customers to sell their services. This in essence destroyed the main way these companies signed up new customers.
As a result of both of these trends, AT&T saw its long-distance revenues drop from that $14 billion in 2001 to $10.4 billion in 2002 and $7.5 billion in 2003. That's a pretty steep decline for a core service. But look at the equivalent number for 2007: $3.7 billion. Yes, it's about half what it was in 2003, which is still a steep decline. But look at it a different way: this is a service that AT&T hasn't marketed for five years and which competitors have been aggressively selling against for even longer. And yet revenues have only declined by about 50% in four years.
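Those figures imply a steep but clearly decelerating decline. As a rough sketch (using only the revenue numbers cited above; the compound-rate arithmetic is mine, not AT&T's), the implied annual decline rates look like this:

```python
# AT&T standalone consumer long-distance revenue, in $ billions,
# for the years cited above.
revenue = {2001: 14.0, 2002: 10.4, 2003: 7.5, 2007: 3.7}

def annual_decline(start_year, end_year):
    """Compound annual rate of decline between two years."""
    years = end_year - start_year
    ratio = revenue[end_year] / revenue[start_year]
    return 1 - ratio ** (1 / years)

# The initial collapse: roughly 27% per year from 2001 to 2003.
print(f"2001-2003: {annual_decline(2001, 2003):.0%} per year")
# The long tail: roughly 16% per year from 2003 to 2007 - much
# gentler, despite zero marketing and aggressive competition.
print(f"2003-2007: {annual_decline(2003, 2007):.0%} per year")
```

In other words, even with no marketing at all, the rate of decline roughly halved once the easily poached customers had left.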
That’s actually pretty remarkable! Every one of the customers still using AT&T for long-distance must have been contacted by their local phone company to add long-distance to their package, and yet they’ve resisted? Why? Because these people are by their very nature apathetic about changing services that serve them perfectly well - they’re classic laggards when it comes to new technology. And yes, I’m sure the over-60 set is unusually well represented among this group as the Dilbert cartoon seems to imply.
But they’re not the only ones. A few years ago I heard one manager at a telco talk about two types of customers in the context of bundled services:
- Those who were cash rich but time poor. This group tended to favor a telco bundle, because the money saved by switching to several cheaper individual products was worth less to them than the time they'd spend researching, choosing and switching to replacement products. Busy professionals earning a decent living are prime examples.
- Those who were time rich but cash poor. These people needed to save money wherever they could and had the time and inclination to hunt down the best possible deal, even from multiple separate providers if appropriate.
Laggards don’t just include the seniors who simply can’t keep up with changes in technology and just want the phone to work. They also include those in the first group above - young and middle-aged, affluent people. The interesting thing is that, although all providers naturally chase the group most likely to switch, this is often a mistake. The laggards can often be far better customers, because once you’ve got them they’re much less likely to switch to someone else. The second group would have taken up the old AT&T, MCI and Sprint on every offer presented during a telemarketing call and probably ended up being paid to take long-distance service from whichever provider they were using at any given time. Those are the worst customers in the world to have!
Telcos should be doing everything they can to hold on to the customers who are motivated more by inertia than by cost savings. A former colleague of mine did some analysis a few years ago on the net present value of wireless subscriptions, and he discovered that the 'glovebox phone' - the cellphone someone signs up for and then stashes in the glovebox in the car for emergencies - is the highest-net-present-value subscription a carrier has: lots of revenue, hardly any cost. And those customers will never churn.
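A simple net-present-value sketch makes the point. The revenue, cost, churn and discount figures below are entirely invented for illustration - they aren't from any carrier's books - but they show why a low-usage subscriber who never churns can beat a heavy user on NPV:

```python
def subscriber_npv(monthly_revenue, monthly_cost, monthly_churn,
                   annual_discount=0.10, months=120):
    """Rough subscriber NPV: discounted monthly margin,
    weighted by the probability the subscriber is still around."""
    monthly_discount = (1 + annual_discount) ** (1 / 12) - 1
    npv, survival = 0.0, 1.0
    for m in range(1, months + 1):
        survival *= (1 - monthly_churn)  # chance they haven't churned yet
        margin = monthly_revenue - monthly_cost
        npv += survival * margin / (1 + monthly_discount) ** m
    return npv

# A 'glovebox phone': modest revenue, almost no usage cost, almost no churn.
glovebox = subscriber_npv(monthly_revenue=30, monthly_cost=5, monthly_churn=0.005)
# A heavy user on a discounted plan: more revenue, but high cost-to-serve
# and a real chance of defecting every month.
heavy_user = subscriber_npv(monthly_revenue=60, monthly_cost=40, monthly_churn=0.03)
print(f"glovebox: ${glovebox:,.0f}  heavy user: ${heavy_user:,.0f}")
```

With these made-up inputs the glovebox subscriber is worth roughly three times the heavy user, because low churn keeps the margin stream alive for years.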
In all the segmentation work telcos are currently engaging in, they need to ensure that this group is identified and respected for what it is: the backbone of their business. Obviously a telco can't afford to attract only customers in this laggard group - at least at current revenue levels. There are many more customers who are much more likely to churn, and lost customers to win back. But this group can be the anchor for a broader customer base, providing the economies of scale that keep costs low for everyone else and a massive chunk of overall cash flow and margin. So this segment should be rewarded for its loyalty with perks and appreciation. And telcos shouldn't take steps designed to retain the 1-2% of customers most likely to leave if those steps also reduce prices for the other 98% - including those who were already entirely happy with their service and very unlikely to switch.
On the flip side, though, telcos also need to understand that these customers will still be on an old network even when the vast majority of the base has migrated to the new technology. Shutting down analogue cellular networks was delayed for a long time because of the rump of subscribers still using those old phones (or at least keeping them in gloveboxes). ATM and Frame Relay networks (yes, this effect applies in business too - businesses can be at least as inert as individuals) will have to be run for several more years even if the vast majority of customers have migrated to MPLS. This means you have to find ways to migrate those customers gracefully when the time comes, in such a way that they suffer no disruption and the service works the way it always did (don’t be tempted to think that it has to be better - they won’t see it that way).
Overall, my key message is that this group, and the effect of inertia and apathy on telecoms growth, cannot be ignored. These are some of the most valuable customers telcos have, they need to be served differently, and they don’t want to migrate to the latest greatest product or platform. Telcos need to understand all of that better and act accordingly.
December 5th, 2008
There was yet another article recently on the topic of why we all need faster broadband - this time on the GigaOM site. What’s funny is that the same arguments have been made for faster broadband ever since the days of dial-up, and they really haven’t moved on that much. In brief, here are the three reasons the author thinks government should invest in broadband:
Today the New York Times ran an article about the rising costs of a college education and offered up the idea of distance learning as being one solution to rising costs. I don’t think distance learning can substitute for the entire college experience, but having participated in several distance learning classes, it can be used in conjunction with meetings online or weekly in-person meetings to create a rich learning and discussion environment. Broadband makes that possible today, and faster speeds will only add to the interactivity of those online environments — making a college education more accessible. The kids who most benefit from this are not living in FiOS areas; they are in poorer areas where ISPs try to avoid or delay launching high speed services. I know, I live in one of those areas. The government needs to step up to improve this access divide.
Medical Care Improvements
Broadband also can save on medical costs and improve access to health care. A release issued today highlighted radiologists' frustration with quality of care. Ninety-four percent of radiologists surveyed blamed missed or delayed diagnoses on the inability of medical imaging systems to communicate with the information systems of physicians and hospitals. Delivering radiological scans via broadband requires fat pipes and rapid speeds, but the benefits to patients, insurers and doctors would be many: fewer scans, faster delivery of images to where they are needed, and lower costs associated with the process.
Another benefit of better broadband would be the ability for people to telecommute. This has far-reaching benefits, from fewer cars on the roads to increasing a family’s resilience in the face of economic uncertainty. As a telecommuter, when I change jobs I don’t have to sell my house, uproot my husband’s career or leave the network of friends and family who support us. The more people who have that flexibility, the less traumatizing job loss can be both for the individual family and for a particular region.
Education, telemedicine, and telecommuting are all arguments that have been used from the beginning. So is the issue really insufficient speed at this point? No - it isn’t. Cultural issues are a much bigger barrier to these things than Internet speeds are.
The reason we don't have more telecommuting? Because many companies still don't believe in it, or don't offer it to their employees. The vast majority of employees at the vast majority of companies have plenty of bandwidth available at home - 3Mbit/s or more for $50 a month or so - more than enough for most desk-based jobs, and certainly enough for IP telephony and remote access to enterprise applications. Unless you're working in media or another field where you need to move huge files around, bandwidth just isn't the issue.
The reason we don’t have telemedicine? We do, only it’s restricted to a few areas where it really makes sense. Most patients - and most doctors - still prefer the face to face approach that has worked for thousands of years, and rightly so. Unless we all have full-immersion virtual reality suites in our homes telemedicine is always going to be fairly basic. But it has a role in very remote areas such as the Australian outback, where doctors are able to connect with distant medical facilities as needed to provide specialist advice.
The reason we don’t have remote learning in education? Again, we already do, and it’s expanding rapidly. It doesn’t require that much bandwidth to deliver video content, to allow for voice or other interaction between students and teachers, or to do many of the other things that are required to allow education to thrive. Again, a standard broadband connection available to the vast majority of the population is sufficient.
Sure, we should be aiming for faster speeds over time to allow things like delivery of HD video and faster transfer of large files and so on, but these are really incremental improvements at this point, and none of them are required to make the three things listed by the GigaOM author possible. Rather, cultural changes and an awareness of the benefits and possibilities associated with broadband will make the biggest difference. But let’s not make this yet another area where the government gets involved in a way which prejudices the way the market develops.
December 5th, 2008
There’s been a lot of hullaballoo about the BlackBerry Storm over the last couple of weeks. David Pogue, normally so mild mannered, used his print column to lambast the device from several different directions. Another example of the kind of critiques that have been going around is here.
Pogue’s column generated a fair amount of both commendation and condemnation according to his latest blog post, and understandably so. He seemed unusually vituperative about the device compared with his normal even handedness, and you sensed a certain amount of annoyance at the way Verizon Wireless refused to acknowledge the bugs in the device and that this annoyance might have colored the rest of his commentary. At the same time, many users (including me) seem to have experienced similar problems and he gave their frustration voice.
All in all, I agree with some of what Pogue said but don’t feel quite as strongly about it all as he did. I like a number of things about the device:
- The exterior is very attractive - both front and back. The black glassy finish on the front looks sleek, and the brushed metal finish on the battery cover adds further class. It feels more solid than the Curve and a number of other recent BlackBerries.
- The user interface is also attractive, although the default Verizon red is a little offputting. The new wireframe icons that debuted with the Bold and continued with the Flip are here too, and look pretty good on the whole (although downloaded applications still use the same logos they always have, making them look out of place among the minimalist native ones).
- The email and other PIM functions BlackBerries are famous for are still first class.
But there are a number of problems with the device, too, and the main one is the implementation of the touch screen. I’ve never understood why anyone thought tactile feedback was a useful thing with a touchscreen. If tactile feedback is your thing, then you should really buy a device with a keyboard. If you like touchscreens you don’t get tactile feedback and that’s just fine. What does that tactile feedback do for you anyway? If you hit the wrong key on the virtual keyboard (or more likely in the Storm’s case, select the wrong item in a menu) the feedback is the same - the same clicky sound you’d have got if you hit the right key or selected the right menu item. The Sprint Instinct tried to solve the same perceived problem in a different way - with “haptic” feedback (little vibrations confirming virtual key presses) which was just as useless and also a little distracting.
RIM has made the mistake of assuming that people who want a touchscreen are actually closet QWERTY keyboard addicts - that even if they pretend they're willing to forgo the keyboard, they really want a clicky feel after all and are just in denial. No. They actually prefer the flexibility of a touchscreen and have made a deliberate decision to do without the clicky keyboard, thank you very much. If I wanted both a touchscreen and a keyboard I'd have bought a Treo.
I had the same issues as David Pogue with the virtual keyboard and the touchscreen in general. Coming from the iPhone (which is my main personal device), the two-layered touchscreen (selection via regular touch, action via hard push) was unintuitive - I kept wondering why things weren't happening after I had clearly touched the screen, as indicated by the on-screen highlighting of the object touched. Admittedly, one would get used to this after a while, but it also takes considerably more effort to push the screen down to the point of clicking than with other touchscreens, which gets old - and tiring - quickly.
Then there's the portrait mode implementation of the virtual keyboard, where the device uses the SureType keyboard layout instead of a more tightly spaced QWERTY layout as the iPhone does. This is frustrating for those of us who don't regularly use SureType or other predictive-text keyboards. And the keyboard in landscape mode takes up so much of the screen as to be nearly useless too.
RIM should also have realised that other touchscreen phones - especially the iPhone - have now defined the expected user experience in other areas. In Google Maps and the web browser, multi-touch commands like pinching are now the norm on other devices, but not on the Storm. You double-tap (as on the iPhone) to zoom in, but have to hit the back button to zoom out again (I never would have figured that one out on my own). As with the Bold, where this also annoyed me, even perfectly visible links can't be clicked until you've zoomed into the page - an issue you don't have on the iPhone, where precision finger taps work even in full-page view of a webpage.
The accelerometer-powered screen rotation is either much too slow or much too eager - taking ages to rotate when you turn the device very deliberately, but constantly switching to landscape mode when you so much as look at the device from a different angle. I don't know how RIM has managed to create both problems at once, but it has.
I’ll stop complaining there - I actually like the device a lot, and a lot of its foibles just take some getting used to. But it really feels like RIM was making a device for reluctant touchscreen users instead of touchscreen enthusiasts, and as a result has rather handicapped what could have been a much more compelling device. Instead of trying to reinvent the full-screen touch device, it should have recognised that Apple defined that space with the iPhone, creating certain expectations, and that the best BlackBerry could do was match the iPhone for ease of use and design and improve on it with all the stuff BlackBerries do best. Instead of which, they’ve combined a sub-par interface with those BlackBerry goodies and come out behind the iPhone instead of in front of it, at least for this user.
November 25th, 2008
Scott Cleland of Precursor has posted a very interesting analysis of Google’s usage of bandwidth and the associated costs. He claims that Google is underpaying for its bandwidth by a factor of 21 based on a variety of calculations and estimates. The analysis is sound up to a point but it then makes the mistake of conflating two things that are really separate and don’t make much sense being treated the same. I posted a comment on his blog but since it hasn’t appeared (neither have any others) I’ve posted it here too.
In essence I think Scott’s doing a solid job of representing his clients - the telcos - but he also repeats a trope that began, I think, with Ed Whitacre - that Google is somehow using telco bandwidth for free when it should be paying for it. I use an analogy below to critique the analysis because this stuff is complex enough to benefit from it. Let me use another here to critique this idea that Google somehow ought to be paying its fair share. Say a store in a certain area suddenly starts doing great business, and customers are flocking to it on the local bus system. Would it be reasonable for the bus company to start charging the store to recoup some of its costs when all its customers are already paying the prices it has decided to charge in order to ride the bus? No. If it is unable to fund its costs from the prices currently being paid then it needs to charge more or seek ways to reduce its costs. The store isn’t the problem - in fact it’s doing good by creating more demand for the company’s services.
The telcos have no business asking Google to fund the costs of consumer broadband connections any more than the bus company has any right to ask the store owner to subsidise bus tickets. With that, I’ll leave you to the comment I posted on Scott’s blog.
You’ve done some very interesting and useful analysis here. Thank you for sharing it with us.
However, one criticism is that you conflate two things and treat them as if they were the same and part of the same category: namely, consumer broadband spending and service provider bandwidth spending. These two things happen at opposite ends of the internet value chain and are entirely separate.
In chart VI of your report you act as if consumer broadband and dial-up internet access spending and Google’s spending on bandwidth were the only chunks of money being spent on bandwidth/broadband in the US. This is, of course, not true. Google’s spending should properly be put in the context of overall service provider spending on bandwidth, not treated as part of consumer internet access spending.
Measuring Google's spending as a proportion of consumer internet access spending is meaningless - it's like asking how much it costs the Yankees to drive their players to the stadium as a fraction of how much it costs all the fans to get there. You'll get a number out of that, but it won't mean anything.
I would suggest calculating how much Google pays for bandwidth as a portion of all the spending by service providers on bandwidth used to serve US consumers. Your numbers might be just as stark, but at least then you’d be measuring the right thing.
The study attempts to push a theory that AT&T under Ed Whitacre but also others among the broadband providers have attempted to push for some time, which is that consumers and Google and others should all just pay their “fair share” of the costs of the Internet. However, this simply isn’t the way free markets work: the fact is that there is a value chain and different players pay for different parts (as they do in any other free market).
Google pays less than it otherwise might because it has so many peering arrangements (entered into voluntarily by the various parties to them) which it doesn’t pay anything for. That’s the way the system works, and large broadband providers benefit from it too. AT&T, Verizon, Qwest and the cable companies are perfectly free to develop their own business models to compete with Google and are entirely within their rights to sign whatever agreements they want to. No-one is forcing them into anything. They can also charge their customers less or more if they think that will solve the problem. The real issue here is that bandwidth use is skyrocketing and broadband providers don’t want to pass the costs on to their customers, but those customers are causing the increase in costs and should rightfully pay for it.
I’m not a stooge for Google or the broadband providers (though the broadband providers are clients of mine) but I think this analysis needs some tweaks before it becomes really meaningful. Thanks again for some very interesting groundwork though.
Note: I’ve heard Scott argue against net neutrality at a couple of industry events and I think he actually makes some really good arguments (although I think there - as here - he sometimes overplays his hand). I have a lot of respect for the work he does and I’m grateful for the analysis he’s done here too.
Note 2: Google has posted its own critique / response here.
November 24th, 2008
I penned the piece below for our Straight Talk daily email the day after the US election. Since that time I’ve seen more and more articles springing up around this subject, some of them based on new news such as the appointments to the various Congressional committees overseeing aspects of telecoms. Some examples:
Here’s my piece from a few weeks ago. Also on the topic of regulation, the “5 things regulators can do to stimulate telecoms” I mentioned in this post will be published in the Straight Talk monthly publication in December - co-authored with Matthew Howett, who heads our regulation team.
What the US election means for telecoms
The Dow Jones stock index dropped around 5% on Wednesday in an apparent response to the election of Barack Obama as the next president of the United States. Some investors fear an increase in regulation and taxation and a negative impact on businesses under an Obama administration. Telecoms operators should start thinking about what an Obama presidency will mean for their businesses too.
A deregulatory FCC administration comes to a close
The accepted wisdom appears to be that the Chairman of the FCC, Kevin Martin, will step down following the election of Barack Obama, and will likely be replaced by a Democrat nominated by the incoming president. This will shift the balance at the FCC from a 3-2 Republican majority to a 3-2 Democratic majority for the first time in eight years. As the terms of the other commissioners expire over the next few years there may be further changes in the composition of the commission.
The Martin FCC and the regime of his predecessor, Michael Powell, have taken a largely deregulatory, hands-off approach to the US telecoms sector. They have given the green light to large mergers such as Deutsche Telekom’s acquisition of Voicestream, SBC’s acquisition of AT&T and AT&T’s subsequent acquisitions of BellSouth and Cingular, Verizon’s acquisition of MCI, and just this week the acquisition of Alltel by Verizon Wireless and the merger of Sprint’s WiMAX assets with Clearwire. As such they have presided over a significant thinning of the major players in the US market, leaving four major wireless operators and three major wireline carriers (with Verizon and AT&T making up two of the members of both camps).
At the same time, regulations have been rolled back in a number of areas, reducing the reporting and network access requirements imposed on the RBOCs and focusing on facilities-based competition as the preferred alternative to regulation-dependent, service-based competition. This has resulted in an effective duopoly between cable companies and telcos in the consumer market and an oligopoly in the large enterprise market, with only the small and medium-sized business market seeing a significant number of competitors.
Larger operators likely to suffer most, but broader repercussions likely
The incoming FCC is likely to take a different approach, much more sceptical of further concentration of market power in the hands of a small number of players, and much less likely to lift regulation. Indeed, it is also much more likely than the outgoing administration to finally tackle the issue of net neutrality decisively, something the Martin administration dodged for a long time and then handled only half-heartedly earlier this year.
As such, the large operators which have done so well under President Bush are likely to find life rather harder under President Obama, while smaller players and consumer groups are much more likely to have their voices heard. The change at the FCC is likely to be echoed in other government institutions, such as the Federal Trade Commission and the Department of Justice, both of which also have roles to play in regulating competition and merger activity in the telecoms sector.
On top of all this, broader changes in the government’s approach to regulating business will impact the telecoms sector too. The Obama administration’s likely focus on environmental issues may lead to more stringent emissions standards, for example, something which would hit telcos, with their large fleets of specialised vehicles, particularly hard. Tax rates on corporations may have to be raised to pay for some of the income tax cuts and increases in spending proposed by the Obama campaign.
At the same time, the drive towards sustainability should also provide opportunities for telcos, which stand to gain from efforts to substitute telecommuting, TelePresence, telemedicine and other innovations for their less carbon friendly current incarnations. If the Obama campaign makes good on its promise to invest in clean energy and other technology to reduce emissions, telcos may be the beneficiaries of some of this spending too.
Larger telcos are likely to feel the impact of the change in administrations more than their smaller brethren, but all telcos are likely to have to make some adjustments and concessions under the new regime.
November 3rd, 2008
Google is famous for its “uncluttered design”, especially as regards the Google home (search) page. Well, yes, we’ll give them that. It’s not hard to do better than Yahoo! in this regard, given that Yahoo! was a directory first and a search engine second, and in the meantime had become a bloated, all-things-to-all-people portal.
But there are some things Google really doesn’t do well, or at least could do much better in relation to design, and also in relation to the features of some of its core products. Here are ten of them, from a purely personal perspective as a user of these products:
1. OK - they finally gave us themes this past week. But why the heck did it take so long? And why were users limited to third-party browser add-ons to achieve this effect? How hard could it be? But more important than themes (I’m using Shiny these days, by the way) is the design itself. So this one is more of a past peeve than a current one, but it is reflective of how long it takes Google to get some of the basics in place. And I still can’t pick the colors of individual theme elements myself - I have to go with a complete package (pretty though they are).
2. Why should it take me two clicks (or more often one click, a scroll and a click) to file a message in a folder (sorry, under a “tag”)? I have the list of tags in my left navbar anyway - why not just let me drag the message there, as I can in any desktop email program and in Yahoo! Mail and Hotmail (or Live Mail, or whatever it’s called these days)? Are you worried that the extra page weight will slow the app down? Let me choose! You already give me an option to use the “Older version” and an option to use the HTML only version on a slow connection.
3. Why can’t I decide once and for all what font I want to write in, and have multiple signatures? Have you just assumed that if I’m serious about this stuff I’ll use a desktop client via IMAP? Why would I do that? The way you’ve implemented IMAP with tags and folders it screws up my list of folders every time I try to do it - I get three different trash folders and no easy way to manage archiving items… Again, how hard could it be to implement basic email templating and a signature picklist?
4. Why is it that you can remember locations I’ve typed in to the search bar and auto-suggest them when I’m typing but I can’t easily retrieve that list later? And then your “My Maps” feature is entirely separate? Can’t you integrate the two, and let me easily view all the locations I’ve previously either typed in or saved under My Maps in one easy list? You might allow me to sort that list by geography, or history, or by various tags I might have applied (if you let me do that). The way things are now, I’m forced to remember some element of an address to get it to pop up again in the auto-suggest list.
5. And while we’re on the subject of Maps, why can’t you do a simple integration between Google Maps on the desktop and Google Maps for Mobile? I’ve been wondering this ever since I started using Google Maps on my BlackBerry and it’s still a bugbear on my iPhone. Why not allow me to access both my “My Maps” locations and recently searched locations from my desktop on my phone, and vice versa? I’m happy to log into my account in order to do this. Your friends at Yahoo! figured out how to do it long ago and it really can’t be that hard. After all, how likely am I to have my desktop/laptop PC open in front of me with a wireless connection to the Internet as I’m trying to follow those directions I looked up, compared with how likely I am to have my phone with me? And how about a “send to mobile” option so I could send myself an SMS with a link that will open in the Maps app or a browser on whatever mobile device I’m using?
6. My main frustration with Google Reader is that I have a lot of my own direct subscriptions but also several subscriptions to other people’s shared items. Because there’s a fair amount of overlap in coverage areas between these various feeds, I often find that an item that is in one of my direct subscriptions also shows up in one or more of the shared items feeds. It’s possible that I’ll sometimes see the same item directly, again in the TechMeme feed, and then two more times in shared items. Although the TechMeme one is hard to solve without a bit more cleverness, it should be straightforward to implement a filter to allow me to just see the item once (with appropriate annotations to indicate it was also in shared items - perhaps along the lines of FriendFeed’s recently added Related Items feature which I really like). I’m fine with it appearing in each of the appropriate folders so I come across it sooner rather than later, but once I’ve read it once, mark it as read everywhere else too. Please?
7. Then let me filter out stuff I’m not interested in. I subscribe to Engadget Mobile, but what if I’m bored about all the stories about the G1 phone? Why can’t I request that Google Reader automatically mark all stories as read in that feed if they mention the G1? Give me filters with some granularity to do this effectively so I can automatically discard things I know I’m not interested in.
8. Then add filters to move items into a special priority folder if they mention keywords I’m particularly interested in, so I can read those before I trawl through the rest.
9. Lastly, let me find features a lot more quickly and easily. Several times now I’ve had to go to a Google web search (ironically) to figure out how to get a Google Reader Shared Items widget for my blog. I shouldn’t have to do this. First, you call it a “clip” instead of a widget, which means I can’t find it using your Help search function. Not helpful. But then you bury it in a totally unintuitive section of the Reader settings. Instead of simply putting that option on the Shared Items page, where it belongs, I have to go and look under Tags and Folders. Now, there’s a reason for that - I might theoretically want to get widgets (sorry, clips) for specific tags or folders as well so you want that feature option there - fair enough. But put two links then - one under shared items (which is the logical place) and one under Tags and Folders.
10. Again, it’s a question of helping me find features / functions by putting them in a logical place. I want to be able to set whether or not Google Calendar automatically creates a reminder for new calendar items, and if so what the characteristics are. So where do I go? Settings, right? But no, it’s not there. There’s no sign of it there. So I go into a calendar item and find the reminder section. Is there a link there to tell me where to change this setting? No. So I go to the Help function and it tells me that to change this setting I need to click on the tiny arrow next to the name of a specific calendar in the left navbar and then select Notifications (not Reminders, but Notifications, despite the fact that in individual appointments they’re referred to as reminders). Then I can finally set default settings. Why on earth is this so hard to find? Why not just have it under settings where any sane person will look for it? I realize that people might want to set this differently for their different calendars, but this is the default behavior even if you only have one calendar. And what if I want the same behavior for all my calendars? Couldn’t you at least have a link under Settings?
A few closing thoughts. First, I realize I sound like a grumpy old man. I’m not old or grumpy, as it happens, but these are things that repeatedly irk me when I use Google products.
Secondly, as will have become clear from the above, I still use Google products a great deal - Google Reader is my default feed reader, Google Maps is my default mapping provider, and Gmail is where I get my personal email. I also use Google Calendar to track some personal calendar items. So they have me hooked regardless of these shortcomings. Clearly, they’re doing a lot right.
Lastly, some of these will come down to personal preferences - some people may love the way these things work at the moment and some will agree with me. But my plea is partly for more choices - let me choose, and if in doubt provide a link in two different places so I can find something quickly instead of having to hunt around your Help function (or worse, a web search) to find what I’m looking for.
October 21st, 2008
I got a letter in the mail today from Optimum / Cablevision, our friendly local cableco. We were Cablevision customers when we first moved to the area because Verizon’s FiOS service, which we had known and loved in Boston before we moved, was not available yet. We had broadband and TV from them and phone from Verizon. When FiOS duly arrived in our area, we signed up for the triple play from Verizon and ditched Cablevision without further thought.
Since that time we’ve received the odd piece of mail from Cablevision (“some FiOS customers are not getting what they thought they signed up for”, “wish you hadn’t switched?”, etc.) and the other day a Brazilian guy doing door-to-door sales for the company showed up too.
But the letter that arrived today was a bit different. It appears Cablevision is tired of my refusal to come back to them and has decided to start with the scare tactics instead… I’ve embedded a small version of the letter below but click here for a large image and here for a PDF (both are scans of the original).
My wife brought me the letter in my office and just laughed about it. “Oh no, our house is going to burn down because of our FiOS service!” she said. She saw it for what it was - scare tactics, pure and simple. But would other customers? Would this work? And are they sending this to everyone, or just people like me that have said no too many times to the straight pitch?
Here’s the full text:
New York Public Safety Commission Inspectors have found that “a high proportion” - over 50% - of Verizon’s FiOS installations in customer homes had failed to adhere to some of the bonding and grounding provisions of the National Electrical Code (NEC); the Commission has ordered Verizon to undertake a comprehensive remedial plan. The NY State Attorney General’s office - supporting the Commission’s action - had also noted that many customers were unaware of the potential risks involved in these faulty installations.
State inspectors first found grounding problems in the spring of 2006. They discovered that some FiOS equipment - Optical Network Terminals, or ONTs - had been grounded to heating fuel-vent pipes and plastic pipe elbows, or were not grounded at all. The PSC report noted that improperly grounded electrical equipment can cause fires or electrocution in the event of equipment failure or lightning strikes. PSC inspectors found similar problems in a series of subsequent audits through the summer of 2008. Although Verizon has now improved its code compliance on new installations, a significant number of faulty installations still remain and, under Verizon’s plan, might not be fixed until May 2009. Verizon customers’ FiOS installations in areas that have yet to be addressed are still at risk for these bonding and grounding faults.
As an alternative, Cablevision offers our popular Optimum Triple Play which includes TV, High Speed Internet and Unlimited Calling in the US, Puerto Rico and Canada [why they think I need to know about Puerto Rico and Canada, or any area outside of the New York area, I have no idea] for just $29.95 a month each for two years with FREE professional installation. And there are no annual contracts.
For more information or to order the Optimum Triple Play, please call 1-866-***-****. Our sales representatives will be happy to help you, 7 days a week, 7am-midnight.
Note, no mention of them actually removing Verizon’s ONT, so one assumes this actually does nothing to solve the underlying problem, should there be one in the first place. Since Verizon’s fiber installations are permanent (i.e. no going back to copper) I don’t believe Verizon would ever take it away anyway, so by cancelling I’m probably even less likely to have them come and rectify the issue than if I at least remained a customer…
There is news today that Sprint has made a massive leap forward in its customer care operation and has gone from being a real laggard in this area to being top dog in the US - at least on one key metric: wait time before reaching a human being (once through the IVR and into the call center queue). According to a survey from Pali Research (irritatingly, registration required for that link - but a good summary here):
We recently concluded our 6th survey of wireless customer care response times and Sprint has leapt to the best performance of its peers from the worst in our first survey 2.5 years ago… Sprint’s survey results of 91% in Q3 2008 soundly beat its peers: AT&T Wireless - 33%, T-Mobile - 43%, and Verizon - 85%.
I think this is incredibly impressive - Sprint has hardly been a paragon of good performance in the wireless arena lately, and has had one or two other major things to worry about recently too. But it made customer care a major focus area when Dan Hesse took over (see this earlier post) and the results are kicking in. This is one timely demonstration of the point I made in my previous post on the topic of customer care at telcos: fundamentals need to improve dramatically in this area. Kudos to Sprint for fixing this key element of customer service so quickly.
Having said that, this is just one metric. It doesn’t measure customer satisfaction, first-call resolution, or the volume of calls to care in the first place (another area where Sprint was until recently also the laggard among its peers) - it only measures time to answer, admittedly an important element but also an easy one to fix if enough resources and money are thrown at the problem. I’ll be watching with interest as other surveys and reports on the other elements of telco customer care are released in the coming months to see if Sprint’s efforts in those other areas have paid off too.