We have reached the point where the benefits of communication are being outweighed by a dispiriting loss of production.
This was confirmed by a Microsoft report last month that found workers around the world are struggling to keep up with a “crush of data, information and always-on communications”.
The research showed people are spending 57 per cent of their workday on email, meetings and other communication but just 43 per cent on productive creation.
I worry that the solution to this view of the problem actually makes things worse. In my own area of work, there is a constant push—for example—to replace written reports with online ‘dashboards.’ This would, no doubt, shift the classification of the work from being ‘communication’ to something ‘productive’, even though the actual task that is being accomplished is the same thing—just often less efficiently, because dashboards often lack clear commentary and so require lots of people to consider data separately to reach the same conclusion. The communication becomes less efficient, but feels more ‘productive’.
I think the “crush of data” is the bigger problem than the deluge of emails. We’ve reached a strange point where people have concluded that data is transparency, whereas it is often actually obfuscation. I can, in no time at all, produce statistics on the number of notified cases of certain infectious diseases. But this explains very little: declining cases might be a ‘bad thing’ if they are likely to reflect poor access to healthcare or a problem with testing. Increasing cases might be a ‘good thing’ if they reflect work done to target high-risk populations. A dashboard is often much less helpful than an explanatory paragraph, even if one of those things looks ‘productive’ and the other looks like ‘communication’.
The image at the top of this post was generated by Midjourney.
Concern seems to be escalating about the ability of artificial intelligence tools to manipulate or generate misleading images and videos, and the potential for those creations to spread misinformation. Reports of alarming scenarios based on this sort of technology are emerging with increasing frequency.
Nevertheless, I’m reasonably relaxed about this. I believe—perhaps naively—that we are merely witnessing the latest stage of a long period of refining our relationship with visual media.
Our relationship with photos and videos remains quite new, and is constantly evolving. The principle of ‘seeing is believing’, in photographic terms, is a fairly new concept—and has probably never quite been true.
In the fairly recent past, photos and videos were a rare commodity. Television broadcasting of the proceedings in the House of Commons began only within my lifetime. Easy access to digital photography, including among professionals, is an even more recent development. Full-colour newspaper supplements, brimming with photographs, also materialised during my lifetime. In fact, I was attending university before the first full-colour newspapers appeared.
The rise of Photoshop made us doubt the veracity of photographs we were seeing. The Guardian initially banned digitally altered photographs, and amended its Editorial Code in 2011 to allow them only if explicitly labelled as such.
The advent of mobile photography and social media revolutionised our relationship with photography all over again, as Nathan Jurgenson has compellingly argued. Jurgenson’s work is especially apposite here, in fact, as he argues that photographs rarely stand alone these days, but form part of larger conversations. A faked image has less impact in that context.
There’s an argument that in rapidly developing situations, fake photos can have an outsized impact. But that is only true while we are in the temporary, short-lived, and rapidly eroding cultural moment when we assume photographs are accurate representations of events.
Increasingly convincing fake imagery simply teaches us to be more sceptical of what we think we are seeing. It’s another step down a road leading us away from a very recent, and very temporary, and very limited assumption that imagery can tell an accurate story.
Just as we’ve all learned over millennia to be sceptical of hearsay in the heat of the moment, so we’ll come to be sceptical that photos show a true picture.
We’re in a uniquely precarious cultural moment right now, while we’re still coming to terms with this change. But long term, this all simply leads us to fall back on the things we’ve always fallen back on.
We trust the sources we trust because we judge them to be trustworthy. Shortcuts to that conclusion—like the implied authority of appearing on television or having a ‘verified’ status on social media—unfailingly turn out to be temporary and misleading. We don’t believe news stories solely because of the pictures—there often aren’t pictures.
Part of being human is working out who and what is trustworthy. We get it wrong sometimes, and that’s part of being human too. But it’s a skill honed over the whole of evolution, and we’re pretty darn good at it—photos or no photos.
The images in this post were generated by Midjourney.
My life runs on Evernote. It allows me to appear far more organised than I actually am. If anyone ever asks if I have a copy of something, I almost always know that the answer is ‘yes,’ and that I can find it in seconds with a simple search.
I stopped using Evernote many years ago, but I haven’t forgotten what it taught me. Evernote taught me the value of search over filing. I rarely file anything these days because search tools are generally too powerful to bother.
I realise in retrospect that much of the way I used Evernote–despite the above quote–was filing. For example, I would tend to have notes that combined emails and documents about a single topic. I wouldn’t bother with that nowadays: why bother to put all of that together in a single note when I could just search for the bits I need—usually emails—in their original location?
These days, I use OneNote—Evernote’s major competitor—the same way as I used to use a paper notebook: one ‘note’ per day. Anything I need to jot down during the day goes on the note—not in some separate topic-dedicated special folder. It doesn’t need to be filed because it is searchable. The only reason I use OneNote instead of Evernote is because that’s what my employers provide.
Evernote may not have stayed around in my life for long, but it’s clearly had a lasting impact on how I work.
At the time, I used a number of Google services and concluded that I’d happily pay a small fee to use them each month. Five years on, much has changed. I wouldn’t pay to use Google’s services today, and barely use any of them. I thought it worth setting down some thoughts as to why.
Fundamentally, at some point in the last five years—I can’t pinpoint exactly when—Google ‘crossed the creepy line’ for me. Instead of feeling delighted by how the company anticipated some of my needs, I began to feel a little stalked. Worse than that, I had the distinct feeling that Google was increasingly trapping me in a filter bubble, serving up only recommendations and search results that aligned with my preconceived ideas, and filtering out anything that might have challenged me. I began to feel a bit weirded out as Google’s analytics and adverts would follow me around the web. In Love Island parlance, Google was giving me ‘the ick.’
There was no great epiphany; over time I just drifted away from Google’s services.
Five years ago, I’d already mostly moved my search activity from Google to Bing; these days, my default search engine is DuckDuckGo. And, to allow you a peek behind the curtain for a moment, I actually had to go and check that to write this paragraph. When using a browser, I search from the address bar, so I’m not used to typing in a search engine’s URL. I’ve used a variety of providers over time: sometimes I like to use Ecosia because planting trees makes me feel good. Occasionally, I like to use Neeva because I like their ad-free approach. But mostly: I don’t give it a great deal of thought. There’s no significant difference in the quality of results as far as I can see.
When I wrote the previous post, I used Gmail. No more, not least because the web interface has become a bloated mess. I used Proton Mail for a while, but the tight security of the service cost me in convenience, and I ended up moving to and sticking with Fastmail, which has also replaced my use of Google Calendar. I thought that moving my email archive would be a pain, but it was elementary with Fastmail—so simple that I can’t even remember doing it.
Half a decade ago, I said I’d ‘definitely’ pay for Google Maps. These days, I’d undoubtedly pay to avoid using it for most purposes: it seems to have the highest number of junky, inaccurate points of interest of any service I use. For simple navigation, I tend to use Apple Maps—and also often end up using Apple Maps by default on the web because it is built into DuckDuckGo.
I had forgotten that I ever used Google Drive or Google Photos. I use iCloud for storing personal documents, and a variety of cloud services for saving photos, my hope being that at least one of them will stick around for the long term. Likewise, I don’t have a Chromebook, and my default browser is Safari. I do use the Chrome browser at work, but only because the installation of Edge is so locked down on my laptop as to prevent me from using a password manager extension—and Chrome is the only offered alternative.
Which just leaves YouTube. I’m not a frequent user of YouTube for the simple reason that I don’t watch much short-form video, but it is my go-to service for that purpose. I don’t use it enough to have the app on any of my devices. The website is profoundly irritating with its endless ads and trickily worded promos for subscriptions.1 I’d prefer to use an alternative if I knew of one.
I’m also struck that three of the five ‘newer’ Google services mentioned in that post—Allo, Duo and Now—have already closed, underlining the danger of integrating any new Google services into any part of one’s daily life or workflow.
I wouldn’t pay to use Google’s services nowadays: I wouldn’t even be tempted if they paid me to, as Microsoft does with Bing Rewards. Yet, it strikes me that I pay for many products where Google offers approximate equivalents for free (Fastmail, Neeva, iCloud, photo storage). As I said last time: ‘I am only one person, and I’ve no idea how typical I am in this context, but I wonder if my change in behaviour represents a wider portentous shift for Google’s fortunes?’
The picture at the top of this post is an AI-generated image created by OpenAI’s DALL-E 2.
Asking users to click ‘skip trial’—implying immediate payment—to dismiss subscription ads is clearly intended to confuse. ↩
It does all lead me to wonder if the tide is turning, and whether by this time next year we’ll still see the same constant exhortations by television and radio programmes to follow their people on Twitter and to Tweet out our opinions. Maybe enthusiasm really is waning.
This was based on the toxic nature of the site and the strong evidence around its impact on mental health. And this was six months or so before Elon Musk took control of the site, a development that can’t be said to have improved either of these factors.
If anything, the TV exhortations to use Twitter have only increased. Most bulletins on the BBC News Channel now feature the presenter asking the viewer to follow them on Twitter. When I was growing up, it would have been unimaginable for a BBC newsreader to actively promote a non-BBC service from behind the desk. Now they do it almost hourly, and for free, for a service that seems to cause the BBC endless trouble.
We live in strange times.
The picture at the top of this post is an AI-generated image created by OpenAI’s DALL-E 2.
Today is April Fools’ Day, and so some scepticism about the world is probably warranted. But cynicism is probably best avoided as much today as on any other day, even if we all sometimes fall into it.
My iPhone 12 Mini is 25 months old. While Wendy and I were nursing lattes in a local deli recently, I spotted an ‘important message’ telling me that the phone’s battery had ‘significantly degraded’ and my heart sank.
The cynic in me decided that it was no coincidence that this had happened just after the warranty had expired. Surely, this was a ploy to get me to upgrade? But I don’t want a massive phone, so I knew without really checking that the latest and greatest iPhone 14 series wasn’t for me—it doesn’t have a ‘mini’ model.
And so, I resigned myself to taking my phone to the Apple Store and parting with £100 or so for a new battery—and having the hassle of them wiping my phone and having to restore a backup. I assumed that would play havoc with my eSIM, and that I’d be spending an afternoon trying to reverify Apple Pay cards.
I was entirely wrong. I scheduled an appointment at the Apple Store, and they replaced the battery in about an hour, for free, without wiping the phone. They didn’t even take my screen protector off. It really wasn’t any hassle at all, and the staff couldn’t have been friendlier.
Sometimes, tech support can be a pain, and even Apple doesn’t get it right all the time. But when they do, the experience can be superb.
When I was seven years old, I distinctly remember believing that talc was the hardest substance on Earth. I obviously knew that talcum powder felt soft, but convinced myself that the individual bits of talc must be really, really hard.
I reached this conclusion after reading that talc was ‘number one’ on the Mohs hardness scale. Because this was referenced essentially in isolation, I understood ‘number one’ on a ‘hardness scale’ to mean ‘hardest.’ Obviously, I now know that it’s the softest mineral on a 10-point scale.
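To labour the point: the scale runs from 1 (softest) to 10 (hardest), something a trivial lookup makes plain. A minimal sketch:

```python
# The ten reference minerals of the Mohs hardness scale,
# ordered from softest (1) to hardest (10).
MOHS_SCALE = {
    1: "talc", 2: "gypsum", 3: "calcite", 4: "fluorite", 5: "apatite",
    6: "orthoclase", 7: "quartz", 8: "topaz", 9: "corundum", 10: "diamond",
}

softest = MOHS_SCALE[min(MOHS_SCALE)]   # 'talc'
hardest = MOHS_SCALE[max(MOHS_SCALE)]   # 'diamond'
print(f"Softest: {softest}; hardest: {hardest}")
```

Being ‘number one’ tells you nothing without knowing which way the scale runs.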
I’ve been ruminating on this over the past few days: my problem was essentially misinterpreting language as a result of not understanding the wider context. This is also a danger with language prediction models like ChatGPT: they too have ingested a load of text and are—in essence—quoting it back at people, while making basic statistical inferences along the way.
What these models try to do is simply predict the next word, based on statistical analysis of lots of prior text. They are a better-trained cousin of predictive text on your phone; they are not HAL 9000. They don’t have any understanding or general intelligence.
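To illustrate the principle, and nothing more (this toy bigram model is a vast simplification of how real large language models work), next-word prediction can be sketched in a few lines of Python:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which word follows it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent word observed after `word`, if any."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the sofa"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # 'cat': it follows 'the' twice in the corpus
```

Real models work with vastly more context and learned representations rather than raw counts, but the core task is the same: score candidate continuations and pick a likely one. Nowhere in that process is there any understanding of cats, mats or sofas.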
Exposing ChatGPT’s lack of understanding is trivial. For example, from my own messing about:
Which is taller: a 100cm giraffe or a 1m mouse?
A 100cm giraffe is taller than a 1m mouse. Giraffes are known for their long necks and are among the tallest mammals in the world, with adult males growing to an average height of about 5.5 to 6 meters (18 to 20 feet). In comparison, the average height of a mouse is much smaller, usually around 10 to 20 cm (4 to 8 inches).
So, a 100cm giraffe would certainly be taller than a 1m mouse!
And also this:
Would it cost more to buy 6 £1 eggs from Waitrose or Aldi?
As of my knowledge cut off in 2021, the cost of eggs can vary between different stores.
However, Aldi is known for offering lower prices on a wide range of products, including groceries, compared to more premium stores like Waitrose.
So, it is likely that it would cost less to buy 6 £1 eggs from Aldi than from Waitrose.
And also this:
What is hotter: a drawing of an ice cube, which is kept in the freezer, or a drawing of the sun, which is baking in an oven?
Even though both the drawing of an ice cube and the drawing of the sun are subjected to different environmental conditions, such as being kept in a freezer or being baked in an oven, they still cannot be described as hot or cold as they are just illustrations and do not have physical temperatures.
And also like this:
How many times does ‘times’ appear in this question?
The word ‘times’ appears only once in the question you just asked.
People like asking ChatGPT to write emails and speeches, and are impressed with results that are—at heart—just tweaked combined quotes from a wealth of written material. That’s fine, and those responses might well be impressively useful: ChatGPT has read many more Best Man speeches than any of us ever will, so has a better idea of what word ought to come next.
However, people can easily be fooled by these responses into assigning ChatGPT human-like intelligence. This might lead them to ask models like this to undertake real-world high-risk tasks without appropriate supervision. My background means that I automatically worry about their use in medicine. Some of these uses are obvious, like providing basic medical advice, and ChatGPT in particular has some safeguards around this.
Others are not obvious: people asking these models to summarise long medical documents, or to distil patient histories into problem lists. These are problematic because they lie on the border between ‘text analysis’—at which these models excel—and ‘real-world interpretation,’ at which they comprehensively suck, but can have a sheen of competence.
Much of the overhyped discussion about ChatGPT seems to be confusing this language model for something approaching artificial general intelligence. To me, it feels a lot like the advent of Siri and Alexa, with wild predictions that PCs would disappear and voice assistants would be everywhere. People really thought that their voice assistants understood their requests and had personalities—but the novelty has long-since worn off. I fear we’ve got a lot more not-very-funny ‘I asked ChatGPT…’ anecdotes still to live through, though, just as we endured ‘I asked Alexa…’ anecdotes long after they stopped being funny or insightful.
Like voice assistants, language models are useful and will no doubt find a place in everyday use. And like voice assistants, that place won’t be nearly as central to our everyday experience as the early hype suggests, and nor will it be quite where we currently expect it to be.
And research towards artificial general intelligence will proceed apace—but honestly, I think it’s a stretch to say even that ChatGPT is a significant staging post on that journey.
The picture at the top of this post is an AI-generated image for the prompt ‘digital art of a robot in a bathroom applying talcum powder’ created by OpenAI’s DALL-E 2.
Over the course of the last six months or so, I’ve gradually drifted away from Facebook, Twitter and Instagram. In the last few weeks, I’ve deleted my accounts. This feels oddly transgressive, and friends who noticed have responded with mild alarm and a single universal question: ‘Why?’
There is no straightforward answer: it’s a complex web of emotional, social and rational reasons rather than a logically constructed ‘position’. Nevertheless, I thought it might be interesting to scribble down some thoughts on the topic.
I joined Facebook in (I think) 2005. I was a student at the time and so able to join as soon as Facebook ‘launched’ at my university. It connected me with my friends and solved a genuine problem of how best to share things like photos from events.
A year later, Facebook opened to anyone with an email address, and schoolfriends and relatives quickly found their way onto my ‘friends’ list. I appreciated this passive approach to gaining insight into the lives of others, and I felt a genuine connection with people I hadn’t seen for years.
I was also a relatively early adopter of Twitter, signing up in around 2007. I used the service for a variety of purposes over the years, including ‘micro-blogging’ – posting standalone tweets which I also cross-posted to this very website. Over the years, I built up a collection of interesting people who I ‘followed’, and enjoyed debating and sparring with them.
I also started to use Twitter for work purposes, promoting events I’d been involved in, tweeting about conferences I was attending, and that sort of thing. The service became something of a professional networking platform for me.
I was not an early adopter of Instagram. I don’t think I ever worked out how to get the most out of that service: I ‘followed’ people I knew in real life, and enjoyed seeing their photos. I also followed a few accounts which posted beautiful travel photos because they made me dream of summer holidays.
It didn’t take me too long to realise that Instagram wasn’t for me. I’d been posting for a year or so when I gradually drifted away from the service, and eventually stopped opening the app. There was no conscious decision behind that behaviour.
Reflecting on it, my Instagram feed seemed to me to have a single emotional note: joy. There’s a Glenn Slater lyric about ‘forcing you to feel more joy than you can bear,’ and that’s what Instagram did for me. It’s wearing to browse a world where everyone presents as constantly delighted.
I don’t want to live in a world of constant ecstasy; perhaps I’m a grumpy git, but I need a bit of shade to better appreciate the light. And I suppose the same is true of my social media feeds.
The advertising on Instagram also served to undermine the emotion in a perverse way. Artfully taken pictures of crappy products undermine the sentimentality and emotional pull of the service. Putting a beautiful picture of a terrible product alongside a beautiful picture of a magnificent vista undermines the latter. And Instagram ‘influencers’ trying to shill crap always left a nasty taste.
At heart, the service just didn’t make me feel happy anymore, and I drifted away. I didn’t delete my account, but I did eventually delete the app.
It was Brexit that pushed me off Twitter.
Twitter has always had the capacity to facilitate unproductive tribal debate. The format is part of the reason: it isn’t possible to develop a sensible argument in 140 (or even 280) characters. Debates rage about individual word choices while wider context is missed.
The Twitter community also has an inflated sense of its own representativeness and importance. People think their Twitter bubble reflects the ‘general view’ of the world and become enraged and upset when reality conflicts with that perception. Conspiracy theories abound, the ‘mainstream media’ gets pilloried for reflecting mainstream opinion rather than that of the ‘twitterati’, and the whole community frequently becomes angry.
Anger drove me away from Twitter: not the anger of others, but my own anger. I’d habitually open my Twitter feed on my phone from time to time and noted that it always made me feel angry. I could be annoyed at a story of injustice that would otherwise have passed me by; frustrated by someone’s absurd perversion of a news article or viewpoint; or angry at myself for hypocritically thinking less of someone for ranting on Twitter.
Once, I habitually opened the Twitter app while strolling along the promenade in Nice on a beautifully sunny day. There was a lot in my feed about Brexit, including some ‘real life’ friends espousing extreme positions and abusing politicians. The angry mob raised my dander, and I fired off a tweet about this being the first and only time I’d ever get to wander around this beautiful city as an EU citizen.
As I walked on, I reflected: I’d felt relaxed before I opened Twitter; now I was mildly stressed. I’d posted what amounted to a pointless rant, and just contributed to the collected unhealthy rage. I opened Twitter again, deleted the tweet and deleted the app.
It was covid that drove me off Facebook.
My feed became clogged with covid posts, many of them factually wrong, many angry and many seemingly calculated to generate fear. I felt that the time had come for a break from Facebook, and I deleted the app, intending for this to be temporary.
I had temporarily stopped looking at Facebook for periods before: it becomes a pretty awful place in the run up to elections, for example, and I’d tended to opt out.
The difference this time was that I realised that I hadn’t missed it. I had thought that I enjoyed keeping up to date with the antics of schoolfriends and others I haven’t seen in decades, but I came to realise that frankly, my dear, I didn’t give a damn. I’m just not that bothered about the minutiae of the lives of people I would probably no longer recognise on the street.
I was no worse off for not knowing the ins-and-outs of someone’s frustration with the covid one-way system in Tesco, or that the child of someone I barely know has drawn a picture of the virus, or that an acquaintance’s neighbour didn’t join the Clap for Carers. Conversations about these sorts of things are far richer than seeing them written on a screen could ever be. And seeing them baldly written on screen brought out a slightly judging side of my personality of which I’m not terribly fond.
I decided to make my absence a little more permanent. I initially prolonged my period of abstinence, but then came to worry that I might be notable by my absence. What if friends were ‘tagging’ me in posts and I was appearing to ignore them? This didn’t seem fair. And so, I decided to ‘deactivate’ my account.
‘Deactivating’ an account is what one must do on Facebook to keep using Facebook Messenger; it contrasts with ‘deleting’ an account, which removes all data and prevents a person from using any Facebook-badged services. There is no-one I speak to exclusively on Facebook Messenger, so I did ponder for a while whether to delete my account altogether given Facebook’s appalling privacy record. But I reflected that I use other Facebook services such as WhatsApp, so why create hassle for myself? ‘Deactivation’ was for me.
Except, my account mysteriously kept ‘reactivating’, and in a fit of pique when logging on to deactivate again, I got fed up and decided that account deletion was for me after all. I clicked the button… and then a seemingly endless parade of further confirmatory buttons.
I haven’t missed it since.
Oddly, covid briefly drove me back to Twitter. Social distancing’s ability to cancel meetings meant that I was missing my professional network, and I thought that engaging via Twitter might be a good idea. It didn’t work out well.
I engaged in a casual conversation with some microbiology colleagues about a small detail of some guidance with my employer’s logo on it, trying to understand the virological basis behind it. This is exactly the sort of ‘corridor’ conversation I would have in person all the time. It turned out that the guidance was wrong, and some colleagues were, I think, mildly annoyed that I’d had a public conversation about this.
I thought: what’s the point? Better to have conversations away from the febrile atmosphere of Twitter, where anything might end up offending people at any given moment. And so I disengaged again.
It took a long time for me to come to the decision to delete my Twitter account. I knew I didn’t want to use it for work or personal purposes, but I did auto-post to Twitter frequently. For example, my blog posts and Goodreads reviews usually auto-posted, and could lead to some interesting discussions both in person and online.
The problem was the same as for Facebook: what if my lack of attention to ‘mentions’ and messages were taken as a slight?
I initially changed my account name to include the words ‘unmonitored account’ and updated my ‘bio’ to say that I no longer used Twitter. But then, I came to reflect that I’m not self-obsessed enough to truly believe that people want to see my stuff auto-posted despite me not engaging with the service. I decided to delete my account altogether.
I have been surprised by how few people have even noticed my absence on these platforms, or at least asked me about it in person. Even members of my own family haven’t noticed that I’m no longer around on these services. The only time it has come up is when people have asked why they can’t tag me in posts.
I don’t think it has had any real impact on my own life with the exception of removing a complication. I have noticed that I have slightly richer social conversations, because when ‘catching up’ with people, I haven’t already derived most of what they’re telling me from online feeds: but I don’t know whether the other party feels the same way, or whether this is a biased judgement.
Contrary to much that is written on this topic, leaving these services has not ‘changed my life’ for good or ill. I suspect I’ll re-join these services or their successors in years to come. But the sort of social media offered by these three services is not for me right now. And I’m content with that.
Image credits: The image at the top of the post, showing a mobile phone with the Facebook logo scored out, was posted to Flickr by bookcatalog and is re-used here under its Creative Commons licence. The second image is an edited screen capture of the Facebook homepage, made by me. The third image is a version of an image posted to Flickr by TT Marketing, which I have modified and re-used here under its Creative Commons licence. The fourth image is a version of a picture posted to Flickr by Cambodia4kidsorg, which I have modified and re-used under its Creative Commons licence. The fifth is another of bookcatalog’s pictures, modified and re-used under its Creative Commons licence. The seventh is modified from an image posted by hedera baltica, re-used under its Creative Commons licence. The eighth and final image, which shows a crater on Mars, was originally posted by mariagat mariagat and has, again, been modified and re-used under its Creative Commons licence.
This was one of those interviews which is sort of interesting but doesn’t really say much. Though I was quite taken with this description of Apple’s canteen where the cutlery is hidden from view in an illuminating example of form over function:
You can’t tell what the chefs are cooking because there are no menus on display (the options are on your phone if you’re an employee). You don’t seem to be able to pay cash for anything and there are no sauce sachets or eating utensils to be seen unless you know where to look (they’re with the other unsightly essentials like bottled drinks and napkins, sunk out of sight in smooth, curved central islands reminiscent of giant iPods).
What really struck me about this interview was the weird cognitive dissonance in the tenth paragraph. In this paragraph, Hoyle points out that:
Apple’s App Store is “curated” to the extent that you (and your children) won’t find hate speech or pornography on there.
That is, Apple – for better or worse – prioritises its values over the freedom of its customers to easily use the platform for activities which meet with disapproval from Apple. I wish Hoyle had used this (puritanical?) attitude to challenge this bit of the same paragraph:
Apple has regarded privacy as “a basic human right” for a long time and “built the company around” that belief. The sprawling, intimate personal data profiles that companies like Facebook and Google compile “shouldn’t exist”, Cook thinks.
Cook claims that Apple is built around privacy. Yet, while Apple is happy preventing access to hate speech on the App Store, it actively promotes the Facebook app despite it asking for user permission to build data profiles which Cook says are antithetical to everything Apple stands for.
This seems a really odd moral position to me: if your company is reputedly built around one “basic human right”, why allow apps which violate that fundamental belief and ban apps which contravene less dearly held standards? The answer seems fairly obvious to me: the Facebook and Google apps are among the most popular, and are core to the iPhone experience. But can you really claim something is a cornerstone value if you ignore it to sell more phones?
I was also a bit riled up by this ludicrous comparison:
On cybersecurity … the company also protects its FaceTime and Messages apps with end-to-end encryption unlike, say, Google’s standard Gmail.
Why compare a closed messaging system, where end-to-end encryption is easy, with an open standard like email? That reads like a line supplied by Apple. It should have been challenged by asking if Apple’s iCloud email service protects messages with end-to-end encryption, which of course it does not.
There are a lot of things that Apple does extraordinarily well. It is evidently one of the corporate success stories of our time and has inspired phenomenal brand loyalty among a huge population of users. But it isn’t perfect.
Much of the media, and Hoyle’s article is no exception, seems far too credulous when it comes to Apple. Coverage of the company would be much more satisfying if it showed a degree of balance, or at least attempted to challenge some of the more outlandish PR lines rather than simply repeating them verbatim.
The picture of Tim Cook at the top of this post was uploaded to Flickr by Fabio Bini, and is used here under its Creative Commons licence.
The Economist’s data team has today published a blog post called “How much would you pay to keep using Google?” Unusually for The Economist, the headline isn’t really an accurate representation of the contents, which actually discuss research findings related to the amount people would have to be paid to give up using search engines in general.
But the original question got me thinking. A couple of years ago, I’d have responded with a fairly substantial sum. These days, I’m not so sure. I wonder what that says about the state of the company?
Google used to be the only decent search engine. That is no longer true. A couple of years ago, I decided to see whether it was possible to go all-in on Bing. Ironically, this was somewhat inspired by Matt Cutts, formerly of Google, who sets himself 30-day challenges to test assumptions and better himself. Surely, I reasoned, Bing couldn’t be as bad as people made out, nor as bad in daily use as the occasional exploratory search made it seem. I switched my default search engine to Bing in Chrome and on my mobile.
And do you know what? The vast majority of the time, it is perfectly competent. On the very rare occasions when I’m struggling to find something, I also try searching on Google: I’d say 75% of the time, I fail to find what I’m looking for there too. I’d also say, without any proper data to back up the assertion, that Bing’s results seem less replete with spammy, useless links than Google’s. And Bing’s rewards scheme buys me an occasional coffee. I don’t think I’d pay for Google search.
But of course, Google provides more than just search. Would I pay for other components of their offer?
Would I pay for Gmail? There are perfectly decent alternatives to Gmail, and I rarely use my actual Gmail address but forward stuff to it from elsewhere, so redirecting future mail wouldn’t be a problem. Moving the archive would be a pain. I’d probably pay a small fee—a pound a month?—just to avoid the hassle.
One service I would definitely pay for is Google Maps. I use Google Maps every day and have not found anything that can even begin to compete. Back in December, Justin O’Beirne wrote a great essay on Google Maps’s moat—the content and time barrier which keeps it well ahead of competitors. On these terms, I guess Google Maps is probably the most “valuable” bit of Google to me.
Google Drive is great, but OneDrive is pretty great too. Chrome is my current default browser, but I’d happily switch to Firefox. Google Calendar is actually quite irritating (especially since ‘quick add’ was removed) and I use it only because it’s handy. I like my Android phone, but I’d get by on iOS. I’d be sad to lose my Chromebook, but Windows laptops aren’t quite the horror shows they used to be.
I enjoy watching occasional YouTube videos, but I wouldn’t really miss them if I couldn’t watch them any more. I use Google Photos, but I also upload all of my photos to other cloud services, at least in part because I don’t trust Google not to shut down Photos when it turns its attention elsewhere (à la Google Reader or Google Notebook, both of which closed while I was an active user).
More recent Google developments (Home, Assistant, Allo, Duo, Now) have totally passed me by.
Jeff Jarvis used to talk about “livin’ la vida Google” to describe his complete immersion in the Google universe. A couple of years ago, I’d have put myself in a similar category, but no longer. I am only one person, and I’ve no idea how ‘typical’ I am in this context, but I wonder whether my change in behaviour represents a wider, portentous shift in Google’s fortunes.