Learning that compromise doesn’t equal failure

I was on holiday in Africa this week, leaving product behind and heading for the mountains and the sun.

Whenever I’m away I tend to keep a sneaky eye on anything involving developers, because issues that arise there are usually the most complicated to fix, and I’m the one who knows what shade of yellow all the balls in the air are.

Mostly I decide things can wait, but occasionally I’ll step in if I see conversations heading towards a consequence that hasn’t been accounted for.

When that happened during my holiday this week, it made me wonder whether imperfection is inherent to product management and whether that might not just be a reassuring fact, but also a positive one that leads to better overall results.

Playing out on Podio this week, I could see my team trying to fix something in a way I knew would break something else. They'd done all the right things: figured out the problem, consulted a developer, kept people informed and agreed a fix.

When I jumped in and got them to revert their decision, it made me stop to notice that they were dealing with a problem that, in the timeframe, didn’t have a correct answer – there were two paths and the one I pushed them to take was still wrong, just less wrong than its alternative.

These are the types of decisions I make all the time, but on each occasion a tiny bit of me feels like a failure; I didn’t find the third way, I couldn’t get everybody to win before the clock ran out and I had to settle for something less than what I wanted.

Watching a great team of capable people going through the same process this week not only made me resolve to cut myself a little more slack in this area, but also gave me the opportunity to reflect on the benefits of that rock-and-a-hard-place situation. There will be a third way for next time and getting back to work on Monday will allow me to find it, but it will also give me something I wouldn't have had if the issue had been easier to solve.

These trickier problems and craggier paths to success lead to the pondering, consulting and late night googling that make us good, better and then the best at our jobs. They’ve led us to find the third way more often than we’ve ever stopped to notice. The development is gradual, but the only way to achieve it is to keep clashing with the issues that don’t have an obvious answer and use what we learn to keep moving forward.

Five minutes with a user

General Assembly are currently running a series of free introductory seminars in various specialist areas. They've been great – I'd definitely recommend signing up to a couple if you have the time.

Last week I went to one on user experience design and I thought I’d share a highlight of the session.

Usually when a speaker says “I’d like you to get into pairs” I get an internal sense of dismay. After all, if I relished opportunities to make awkward chat with humans I didn’t know, I wouldn’t be working in digital.

However, what I have realised over the last five years is how much I love user research, and fortunately that was the task we were set in this session.

We had five minutes to imagine we were working on improving the Citymapper app and run user testing with the person next to us about their experience of using it, in order to make informed recommendations about what could be improved.

The most interesting challenge with my partner – I'll call him Adam – was that he was very distrustful of apps generally, particularly ones that asked for access to his location. He felt ill-informed about how his data might be used, and wary that location-driven apps he downloaded might gain access to other areas of his phone.

Adam had never used a travel app. When I asked him about the methods he preferred for navigating London, he said he would use the TfL website. This started as a productive line of enquiry as we went through the way he used the site, but it soon became clear that he didn't have very high expectations of TfL – it was simply 'fine'.

'Fine' was enough for Adam if it meant maintaining his digital privacy. When I asked him how he would get around outside of London, he said he would ask for directions or take a printed map of his journey – that was how strongly he felt about protecting his identity online.

Thinking about the app design brief, I wanted to get to more than 'fine' with Adam; I wanted to know what would make a digital experience great for him. So I started asking him about his favourite non-location-based app.

It took him no time to get out his phone and show me a radio product he loved, and the most interesting thing about this was how much of what he said could be applied to the development of the Citymapper app.

On the radio app's home screen, there were six content categories – things like 'popular now', 'stations' and 'favourites'. Though the 'favourites' option was in the bottom-left corner, not the usual position for popular actions, Adam went straight to it and started showing me around. This indicated how important personalisation was to him – something really useful to explore with users in the context of the Citymapper app.

I then asked him to show me how I might use it as a first-time user, to get an idea of the journey he might have been on when he first signed up, and what he liked about that.

He showed me the search function and commented on how good the app was at returning exactly what he was looking for really quickly – another useful insight for apps more generally. The interesting thing about that search function was how it remembered his latest query and then made suggestions on what his next one might be.

He said he found this feature surprisingly accurate and that it had introduced him to new stations he wouldn't previously have come across. It was clear from this that he had begun to trust the app, letting it guide his behaviour and nudge him to try new things – another great example of something that could be explored more generally in the development of other platforms.

The whole task was a great reminder that good digital experiences come from putting the person before the product. To learn these things in five minutes made me really excited about some user testing I’m running for Breast Cancer Care next week. If you haven’t done any for a while, I hope this can be some inspiration!

Adopting a ‘start where you are’ approach to digital

A few months ago, I went to one of the really great (and free!) digital seminars run by Precedent Communications.

The main takeaway for digital progress in an organisation was to "start where you are". This was interesting timing for me, as one of the things I was working on was a tool for the site that would allow supporters to share examples of having challenged mental health stigma in their community, school, workplace or daily life.

As the needs of each group were so similar, I was keen to find a way to unite them into one resource that would work for everyone both internally and externally, without any UX compromises that would undermine the value and potential of the tool.

At the same time, I was aware of the need to view the tool’s creation in the context of other interactive areas of the site, to ensure we weren’t reinventing the wheel and dividing supporter attention between similar actions.

In ascertaining requirements, both internally and among key supporter groups who had initially expressed a need for the tool, I identified a significant functionality overlap between this project and something we had previously produced to support our national campaign drive in 2015. The campaigns team had been keen to find a way to ensure this bit of the site remained relevant beyond the campaign, so to my mind, this was a perfect opportunity to test out the “start where you are” philosophy.

Adopting this approach saved us a lot of time and money; we were able to get the tool built easily within our usual monthly sprint turnaround, and for just over £1,000. Most importantly, we were deploying to live having already worked out any UX kinks from the previous iteration, giving us a valuable product from the outset that supporters immediately began to take advantage of.

I’ll definitely be incorporating this into my thinking again!

Getting started with Accelerated Mobile Pages

With everyone on holiday over the summer, we've had some time at Time to Change to do a few nice UX improvements that have been on the list for a while. My favourite has been getting blogs and news stories on the site AMP-ready.

Working in comms, I've found news sites AMPing their articles really useful. When stories break I want to know the details fast, and as a commuter I'm often on a train with terrible signal when those 'need to know' moments happen. Getting quick access from my phone has taken a lot of the frustration out of browsing for information, and gradually I started to think how great it would be if everyone got on board with AMP – including us.

Deciding to get started

In April I went to Brighton SEO and heard a talk from Dom Woodman about getting started with AMP on Drupal, WordPress and Joomla. The talk was great and gave me the confidence to know we could do it, as well as being a good opportunity to ask questions about wider uptake beyond Google's 'top stories' carousel.

Dom's advice was to get going, and sure enough, a few weeks after we completed the sprint, Google blogged an early preview of their plans to expand AMP.

Other benefits along the way

As the Guardian worked in partnership with Google to get AMP working on their top stories, I picked them as a model for how I wanted it to work on our site. The thing I noticed most prominently was the consistent, engaging use of images in every post. At first I wondered how they were able to incorporate images without compromising the integrity of a high-speed article; then I noticed the <amp-img> tag and other similar solutions, which we would also use as part of our own installation.

Previously, feature images weren't something we'd used very often on the Time to Change site, so this has been a real bonus of getting AMP-ready and is something we're now thinking about expanding to other content types across the site.

Any compromises

Installation and testing were fairly standard and took the usual month we allow to get new things done. One compromise we did have with Drupal was that AMP is only compatible with page content types, which means our comments module doesn't pull through to AMP articles.
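On the testing side, one quick spot check is whether each article declares its AMP version via the rel="amphtml" link that AMP-ready pages are expected to expose. Here's a rough Python sketch of that check (not our actual test process – the URL is a placeholder):

```python
# A rough sketch for spot-checking AMP readiness: AMP-ready pages are
# expected to declare their AMP version via a rel="amphtml" link tag.
import re
import requests

# Assumes rel appears before href in the link tag; real markup can vary.
AMP_LINK = re.compile(r'<link[^>]*rel=["\']amphtml["\'][^>]*href=["\']([^"\']+)["\']')

def find_amp_version(url):
    """Return the AMP URL a page declares, or None if it declares none."""
    html = requests.get(url, timeout=10).text
    match = AMP_LINK.search(html)
    return match.group(1) if match else None

# Placeholder URL, not a real article
print(find_amp_version("https://www.example.org/blog/some-story"))
```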

If you've recently installed AMP yourself, I'd be interested to know how this compares to other CMSs, and I'll be looking out for how Drupal 8 develops as AMP's popularity grows.

More on AMP from Moz >

Managing support time with external developers

In the last few months, I noticed our support bills with external developers had slowly been creeping up, so I created a workflow document to manage the way we’re using helpdesks internally.

Although it's a very low-key, totally internal workflow chart, one developer mentioned that quite a few of his clients ask for similar guidance, and asked whether he could share it more widely.

I hadn't really considered it might be something useful beyond our team, but he made me realise this is probably quite a common problem, so here's a format-free copy – plus a rough code sketch after the lists – in case it's useful for you too:

What’s the issue?

A new requirement: 

  1. Have the team given you a full and robust description of what they want?
  2. Would what they want provide significant value for Time to Change?
  3. If yes, give full brief to Becca
  4. Becca puts in development sprint
  5. Becca manages sprint, UAT and deployment

A problem with something that used to work:

  1. Have the team given you a full and robust description of the problem?
  2. Is it broken for everyone, or are they using IE, remote desktop or another likely culprit?
  3. Can you easily / routinely resolve it yourself without any unknown implications?
  4. If difficult to resolve alone, have you discussed with the rest of the digital team?
  5. Before escalating to a developer, is it critically urgent, or if not, has it been consistently broken for two days?
  6. Have you described the full problem and expectations for resolution in one simple Podio message?
  7. Once actioned by the developer, does their solution match or better what you asked them for?

If you've had similar problems, I'd also be really interested to know how you've solved them!

Using social listening to find and amplify user-generated content

As Time to Change is a social movement, sharing personal stories from people with experience of mental health problems is an essential part of our content strategy.

A lot of these stories we commission from the general public and host on the Time to Change blog, but we're also big fans of amplifying user-generated content, to strengthen the voices of people who will ensure the movement can continue long after our funding runs out.

As we’re such a big campaign, a lot of user-generated content comes directly to us without much effort on our part. People write blogs or produce video content and tell us about it, knowing we’re always looking for great things to share more widely.

But what this doesn’t allow us to see is all the amazing people out there who are fighting for the same things, but doing it alone. Harnessing these voices really drives the campaign forwards because, as well as being stronger and louder together, every person who produces content for themselves has the potential to inspire someone else to do the same – building the foundations of a sustainable legacy.

In order to reach these people, we needed a social listening tool that would show us all the content being produced around the world that related to mental health and stigma. Google is a great tool for finding the top-ranking HuffPo and Buzzfeed articles, but those need no amplification. What we wanted was to find everyday people who had powerful personal experiences, wanted to change the world and would inspire others to do the same.

We looked at four platforms that could do this, alongside the other operational requirements we had.

Having interviewed them all, we landed on two finalists. The first one we picked didn't work out – they had a few development issues they couldn't resolve, and I didn't want to keep paying for something that didn't work properly – so we switched to Meltwater, who have been fantastic.

We've been with them a few months now and they do everything we were looking for to help make our social media as good as it can be. We aim for every post to get 600 engagements and a reach of 60,000, but our user-generated content found through Meltwater invariably outperforms this, growing our online community and spreading our message of change to a wider audience. This month our top user-generated post gained 8,700 likes and a reach of 1.3 million – roughly fourteen times our engagement target and over twenty times our reach target – all made possible with a couple of clicks in a pre-built search.

If you're thinking about using social listening more, for this or any of its many other uses, let me know and I can share my notes on the suppliers we spoke to.

In the meantime, here’s a lovely post from someone whose blog we shared this month!

Brooke on mental health stigma

Removing ROT web content

This blog is about removing redundant, out-of-date and trivial (ROT) web content from a large site.

Before I start, I should say that a much more sensible person would have got an agency to do this. At several points during the process (which I started in October) I’ve thought I was being far too stubbornly INTJ about the whole thing and it would be easier to hand it over to a bigger team of people who could work on it full-time. But it was interesting and technically achievable – I’ve come to realise I can’t say no to anything that can be described like that.

What were the issues?

During my first year at Time to Change I’d slowly been discovering a lot of ROT content. A lot of it was unnavigable, which I think was part of the problem – whoever created it had left, forgotten it was there or for whatever reason abandoned it to float around the website without its parents. It happens in any organisation, perhaps particularly in busy charities where everyone’s working at such breakneck speed that the phrase “just get it up there and we’ll deal with it later” can become common by necessity.

The tricky thing is that nobody does deal with it later, because we're all straight on with the next cripplingly urgent thing. And so it continues until there are over 5,000 out-of-date pages that no one but Google has time to notice.

A crucial part of this was that we didn't have a system for enforcing parent page assignment, so bypassing this step and saving everything as a page in its own soon-to-be-forgotten-about right inevitably became common practice over time.

A further contributor to the problem was that we had no processes in place to deal with content that had a known shelf-life, so community event listings from 2011 continued to sit there gathering dust in the absence of an agreed way of unpublishing and redirecting them.

How did we address these?

As well as dealing with the content that needed to be removed, I've been keen to make sure we improved our set-up to avoid the need for such a time-consuming audit in the future. With a bit of development, people adding content to the site are now asked to assign a parent by default, and the auto-generated URL inherits that breadcrumb trail as standard.

To deal with the event listings, I'd hoped for a module that would manage these automatically based on a set expiry date, but the slightly more laborious alternative – manually setting an auto-unpublish date when approving the listing, then using Siteimprove to pick up the 404 for redirecting – is a fine substitute.

Incidentally, Siteimprove has also been great for us in a number of other ways, and I found you can haggle them down fairly substantially from their opening service offer – I'd definitely recommend them.

How did we ascertain the ROT content?

Although we can export all pages on the site from our CMS, I wanted to make sure we knew which were the high performers so we were making removal decisions within an informed SEO context. With that in mind, I picked the slow route of exporting them 5,000 rows at a time, in rank order, from Google Analytics. 
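If you're trying the same thing, stitching those exports together is easy enough in a spreadsheet, but here's a rough Python/pandas equivalent. The filenames and the 'Page' column name are assumptions about how your exports are saved:

```python
# A sketch of stitching the 5,000-row GA exports into one master sheet.
# Filenames and the 'Page' column name are illustrative assumptions.
import glob
import pandas as pd

# Each export is one 5,000-row slice, saved in rank order
frames = [pd.read_csv(path) for path in sorted(glob.glob("ga_export_*.csv"))]
master = pd.concat(frames, ignore_index=True)

master.to_csv("master_audit.csv", index=False)
print(f"{len(master)} pages in the master spreadsheet")
```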

Once I had a master spreadsheet of 17,000 pages, I looked through the first hundred or so to identify any top performers that might also fit the removal bill. Happily there weren’t any I could see that ticked both boxes – top performers were, as you’d hope, well used and positioned pages, or personal stories and news articles that remain indefinitely evergreen.

With that reassurance locked down, I could sort the spreadsheet alphabetically as a way of identifying groups of similar pages – e.g. blogs, news stories, user profiles and database records, which we wouldn't want to remove. It also surfaced duplicate URLs and other standard traffic-splitting mistakes. I then selected and extracted these from the spreadsheet, cutting the master down to around 10,000 pages.

Next I wanted to filter out the dead links: we now had Siteimprove to pick these up, plus a weekly digital team process of redirecting the 404s it crawls, so they didn't need to be included in the audit.

As GA exports URLs un-hyperlinked and minus the domain, I needed to add these in. I used the =CONCATENATE formula to apply the domain and the =HYPERLINK formula to get them ready to be tested. I then downloaded a free PowerUps trial and ran the =pwrisbrokenurl dead-link test for a couple of days over the Christmas holidays. It's worth saying my standard 8GB laptop struggled a bit with this, so if you have something with better performance, definitely use that.
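If you don't have PowerUps (or your laptop is similarly underpowered), a rough Python alternative for the dead-link pass might look like this. It assumes the same master sheet as above, with a 'Page' column of domain-less paths, and the domain is a placeholder:

```python
# A sketch of a dead-link test without PowerUps. Assumes a 'Page' column
# of paths minus the domain, as GA exports them.
from concurrent.futures import ThreadPoolExecutor
import pandas as pd
import requests

DOMAIN = "https://www.example.org"  # stand-in for the real domain

def is_broken(path):
    """Treat 4xx/5xx responses and connection errors as broken."""
    try:
        resp = requests.head(DOMAIN + path, allow_redirects=True, timeout=10)
        return resp.status_code >= 400
    except requests.RequestException:
        return True

master = pd.read_csv("master_audit.csv")
# A thread pool keeps this from taking days on a normal laptop
with ThreadPoolExecutor(max_workers=10) as pool:
    master["broken"] = list(pool.map(is_broken, master["Page"]))

master[~master["broken"]].to_csv("pages_to_audit.csv", index=False)
```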

PowerUps divided my data into broken and live links so I could filter out the broken ones and be left with an updated master spreadsheet of every page that needed auditing by the team. There were just over 6,000, which we divided between us and checked on and off over several weeks, marking them as either ‘keep’ or ‘delete’ and fixing the url structures and parent assignation as we went.

That process identified 1,000 relevant and valuable pages to keep, and 5,000 redundant, out-of-date and trivial ones to remove. Past event listings make up a large proportion of these, but I’d also say you’d be surprised how many other strange things you find when you do something like this!

What’s next?

Now we know what we’re removing, I’m going to get a temp to unpublish and redirect it all, which I hope will take about a week. From there I’m going to look into how we might go about permanently deleting some of the unpublished content, as a spring gift to our long-suffering server.

Once that’s done we can move onto the content we’re keeping, so the next phase of the audit will be about ensuring everything left behind is as fit for purpose as we can make it – I expect this to go on until the end of the quarter.

That time should start to give us an indication of Google’s response to the removal of so many pages. I’m a bit nervous about this and prepared for an initial dip in traffic, but by Google’s own standards I’m hoping for a curve that looks a bit like this:

[Graph: the traffic curve I'm hoping for – an initial dip, then recovery]

I’ll keep you posted!

Initial thoughts on Facebook’s suicide prevention tool

Over a year ago now, I blogged about Samaritans Radar, a tool created by the charity that caused a lot of controversy amongst the strong mental health community on Twitter and elsewhere.

I agreed with a lot of the issues people raised, though for me privacy wasn't top of the agenda; my worry was more about the obligation it placed on individuals to take responsibility for the safety and welfare of others – something our Government should be doing and, increasingly, isn't.

Since then, Samaritans have been working on an alternative – something I was vaguely part of in the early stages, attending a couple of round-table feedback meetings and sharing my own experience of managing these issues on social media in an interview with the Samaritans digital team.

This week, Facebook have launched their suicide prevention tool in the UK, which takes elements of what the Samaritans wanted to achieve with Radar but looks, at face value, to be safer and better.

Three great things:

1. Anonymity for the person flagging worrying content to Facebook, which means they can look out for their friends without feeling alone and overwhelmed by the responsibility of keeping someone safe

[Screenshot: Facebook's suicide prevention tool – flagging flow]

2. A practical and empathetic support journey for the person who's feeling suicidal, including a prompt the next time they log in

[Screenshot: Facebook's suicide prevention tool – support journey]

3. Support options which connect them (anonymously and by choice) to the Samaritans, who are trained and able to provide the right help. They also get the option to connect with a friend – something that might feel more possible once you know for certain that at least one person in your network is worried about you.

In general I think there is a place for providing interventionist support within social media platforms, but I do have some reservations about the term 'prevention' and its potential to reduce the pressure on other agencies to create a society where 6,000 people a year don't take their own lives.

I'm also wary of the passivity of asking for help in such a vulnerable and indirect way – if people need help they should believe in our systems enough to know they'll get it, but since so much evidence about crisis services suggests they won't, people are left posting on Facebook and hoping somebody cares enough to respond.

Image credit: Felicity Morse, BBC Newsbeat.

Upscaling server capacity – part 2

Last month I blogged about how Time to Change would manage the huge spike in server load caused by our annual #timetotalk day.

I talked about having six plausible options for managing the load and about the decision to choose a combination of Varnish and an additional overflow server to cope with the day’s traffic.

Setting these up was relatively painless, but it did involve a last-minute migration of the whole site to a new primary server, because Varnish, for want of a better explanation, didn't like the old one. Once that was done we were basically all set; we just had to make sure Varnish excluded the dynamic tools we needed to run in real time.

What was load like on the day?

Traffic levels were pretty much in line with what we expected, an increase on last year and way more than our standard infrastructure could have handled – in the end just under 50,000 sessions in 24 hours. Social media was a big driver to the site, with just under 60,000 tweets on the hashtag throughout the day – trending until about 5pm, and then again in the evening.

How did Varnish perform?

I can't recommend Varnish enough. We set it up to manage load balancing, and it handled just over two million requests through the day and barely broke a sweat: we used only 10% of our primary server's memory, went nowhere near the overflow at all, and page load speed was a satisfying 3.2 seconds. It was perfect.
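To put those numbers in context, here's the back-of-envelope maths – averages only, since the real traffic spiked around lunchtime and the evening:

```python
# Averaging the day's two million requests over 24 hours. Peaks will be
# several times higher than this average, but it shows the scale Varnish
# absorbed without touching the overflow server.
requests_on_the_day = 2_000_000
seconds_in_a_day = 24 * 60 * 60  # 86,400

print(requests_on_the_day / seconds_in_a_day)  # ~23 requests/second on average
```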

How was everything else?

Ironically after all that, our database ran into a bit of trouble around lunchtime, processing so many form requests that it crashed for around 10 minutes. But that was minor and we’ll upgrade it and increase capacity for next year – watch this space and, in the meantime, get installing Varnish!

Upscaling server capacity – part 1

At Time to Change we generally get around 1,000 visitors to the site a day. On peak days, like World Mental Health Day, this might double, but there's enough flex in our server capacity to manage that without issue. Once a year though, on the first Thursday in February, we run Time to Talk Day – a massive single-day campaign to get the whole country talking about mental health. We're lucky: it's very successful, bigger and better every year we do it. Just last week I was in a meeting with someone from an agency wanting us to pay to trend that day; we were all quick to point out that we organically trend all day anyway, without paying a penny. Very, very lucky.

The downside though, if we can call it that, is that we’ve now reached a point where our site’s infrastructure is not equipped to deal with the spike in traffic, which last year reached 32,000 visitors and tripled the time it took pages to load for supporters wanting to take part in the day. It never went down, which was a relief, but it was painfully slow, despite our most ambitious estimates and load tests in the preceding months.

So it's my job this year to make sure we're ready – not ready for sensible growth on last year, but really ready for the kind of numbers we feel arrogant even talking about, just in case. In previous jobs I've just called the hosting company to increase the server capacity, maybe even kick a few smaller charities off to give us maximum breathing room. But the issue at Time to Change is that we're already on the biggest physical server, at its highest capacity, so the standard option isn't an option for us. Ruling the various alternatives in and out has been pretty stressful, as they all carry risks and, as ever, it's a ridiculous time of year with millions of other urgent things happening at the same time.

So what are the options?

  1. Caching – installing this (I won’t say which for site security reasons) actually made more of a difference than I thought it would, increasing performance by 20%, but we need more than that to get through Time to Talk Day
  2. Move to a virtual server – this does solve the immediate issue as the capacity is exponentially better, but then we’re left with a problem of poor performance due to underload the rest of the year, so it’s not a good long-term solution
  3. Temporary, reversible migration to a virtual server – this is a possibility but a very risky one, as you never really know how your site's going to perform in a new environment until it's had some time to bed in and be tested to its limits in a live setting – none of which we really have time for
  4. A microsite – if I were a web developer I'd probably go for this: move the entire problem into an isolated container that guarantees the stability of the main site? Sounds perfect. Unfortunately I work in comms, and microsites are a brand and UX sacrifice I can't live with, so we're not doing that
  5. Varnish – the Iron Man of caching systems: it turns all your dynamic content static (except the bits we need live on the day) and improves performance by about 50%
  6. Match our current server with another and load balance on the day – like an overflow unit for when the traffic hits its peak

I'm going for 5 and 6. We've got just over a month to get it right, and in the meantime I'm auditing our 10,000 pages to make sure we're only working as hard as we need to (if you're doing the same, there's a rough capacity sketch below). In part 2 I'll let you know how it went!
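If you're doing similar planning, here's one rough way to turn a visitor target into a requests-per-second figure to load test against. Every number below is an illustrative assumption, not our actual planning figure:

```python
# Back-of-envelope load-test target. All of these numbers are
# illustrative assumptions, not Time to Change's planning figures.
expected_visitors = 60_000   # the "arrogant" what-if target for the day
pages_per_visit = 3          # assumed
requests_per_page = 15       # HTML plus assets, assumed
peak_hour_share = 0.15       # assume 15% of the day lands in the busiest hour

peak_hour_requests = (expected_visitors * pages_per_visit
                      * requests_per_page * peak_hour_share)
print(peak_hour_requests / 3600)  # ~112 requests/second to load test against
```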

PS – another way to find out is to take part in the day on 4 Feb 2016 – let's get the nation talking about mental health.