
In our previous blog post about working with external teams, we talked about the importance of knowledge transfer when creating, enhancing, and cleaning datasets. This week, we’re talking about communication.

When we work on projects with co-workers, we do a great deal of communicating: meetings, calls, stand-ups, check-ins, you know the drill. When working with external teams – from consultants to BPOs to crowds – it’s important to remember that though there may be fewer in-house team members involved, communication is still key.

In practice, this first means establishing good knowledge transfer, as we talked about in our last segment. However, that one-way channel is not enough.

It’s important to create feedback channels from your external teams to in-house teams.

When working with a managed team – whether it’s a group of consultants or an iMerit team – this can be straightforward. Between email, phone calls, Slack, Skype – the list goes on – channels are established and it’s just a matter of making and sticking to a schedule.

When working with an anonymous crowd, however, you need to get creative.

The “team” you’re working with could be ever-shifting, making it hard (but not impossible) to gather unified feedback.

One method to try is adding a “task” to your process that asks for the workers’ feedback. You can gather feedback on the task structure and the task documentation, and see if there is anything you can change to make the task more straightforward for them and more useful for you.

Encourage honesty, and then you can iterate your tasks based on the feedback you get. Over time, your tasks will be clearer and easier for the crowd to complete, ensuring even more accuracy for you!

What does this look like in reality?

Perhaps you have an online clothing store, and are entering new items into your retail site’s taxonomy. As you go through the data coming from your crowd, you notice an item with a markedly low inter-annotator agreement rate. Different crowd members keep placing it in different categories; there is no agreement on where it belongs.
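
Spotting such items doesn’t have to be manual. Here’s a minimal sketch of how you might flag them, assuming each worker’s label per item has already been collected (the item IDs, labels, and 75% threshold are illustrative, not from any particular platform):

```python
from collections import Counter

# Hypothetical crowd labels: item ID -> category chosen by each worker.
labels = {
    "item-001": ["hoodie", "hoodie", "hoodie", "hoodie"],
    "item-002": ["hoodie", "sleeveless top", "hoodie", "sleeveless top"],
}

def agreement_rate(votes):
    """Fraction of workers who chose the most common category."""
    top_count = Counter(votes).most_common(1)[0][1]
    return top_count / len(votes)

for item, votes in labels.items():
    rate = agreement_rate(votes)
    if rate < 0.75:  # illustrative threshold for "markedly low" agreement
        print(f"{item}: agreement {rate:.0%} - flag for review")
```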

[Image: hoodie_blog_img – the item in question]

Take a look at it on the left.

Your crowd is baffled.

Some call it a “hoodie” – it does have a hood, after all – while others place it in the “sleeveless top” category. If they were sitting in-house, they could ask you which of these categories matters more to your taxonomy, or suggest placing it in both. But they’re not in your office, so you need to anticipate their thoughts.

To avoid this confusion, design tasks in a way that makes it easy for workers to voice their thoughts.

Going forward, there are many interventions you could take (a sketch of how they might fit together follows the list):

  • Add a checkbox workers can tick to mark that they are “unsure of the category”
  • Include a free-text field that workers can fill in with any questions they have about the categorization of each particular item
  • Place a question at the end of the tasks asking workers whether anything confused them at an overall level
  • End with a question where workers can offer suggestions for improving the tasks or instructions
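
To make this concrete, here’s one way those interventions might fit together in a single task definition. This is a sketch only; the field names and structure are hypothetical, not tied to any particular crowdsourcing platform:

```python
# Hypothetical task definition combining the interventions above.
# Field names are illustrative, not from any real platform's API.
categorization_task = {
    "item_id": "item-002",
    "question": "Which category best fits this item?",
    "options": ["hoodie", "sleeveless top", "jacket", "other"],
    # Checkbox so workers can flag uncertainty instead of guessing.
    "unsure_of_category": False,
    # Free-text field for questions about this particular item.
    "item_feedback": "",
}

# End-of-batch questions that gather feedback at an overall level.
batch_feedback_questions = [
    "Did anything about these tasks confuse you?",
    "Any suggestions for improving the tasks or instructions?",
]
```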

Get creative with the questions you ask your external teams, and remember they’re team members just like those you see in your office every day!

Stay tuned for more tips on using external teams.


The Foundation to Creating Datasets with External Teams

The path to the relevant, clean and complete dataset you need can be a long one, made up of many small, often time-consuming tasks. Maybe it’s tagging hand gestures in a video in order to build an algorithm training dataset. It could be reading individual user comments to keep your site clean and relevant. Or perhaps it’s conducting complex web research on financial entities.

These tasks take time and focus away from other core work, and the option of passing them along to an external team can be quite appealing. However, using external teams – from consultants to crowds – is not straightforward. Communication can be time-consuming, and results may not match what you needed. To address these challenges, we pulled together tips we’ve learned along our own data journeys.

The first tip? Document, document, document.

No matter how you look at it, your external teams are like new hires. They don’t have the company knowledge or familiarity you do. That means it’s best to do all you can to start them off with a good infusion of knowledge.

To ensure a good knowledge hand-off, start with a process document. Chances are this isn’t the first time you have gathered or enhanced the particular dataset in question, so walk through the process as you’ve found it works best, and document that. Make notes of what teams can expect to see as they create and/or enhance the dataset, and include step-by-step instructions as appropriate. Don’t stop there, though! Remember, these are just like new team members. That means…

Adopt the persona of a complete newcomer and revisit your instructions.

Make sure there’s no insider jargon, and no preconceived notions or assumptions that might derail your external workforce. Remember, nothing is obvious. Double-check your language for clarity, and imagine how it would read to someone entirely unfamiliar with the process and the context.

If you can find common ways to break your instruction design, then you can make it more robust out of the gate.

To find bugs and weak spots in our instruction design, we have found it incredibly useful to discuss edge cases and outliers. It’s hard (perhaps impossible) to account for every possible edge case, but it’s critical to include even a few. Talk through how your teams – or other external teams – have handled edge cases and outliers in the past. Do your best to explain the logic and assumptions behind decisions that fall outside the typical cases. This insight into your internal processes and priorities is invaluable to your external teams, and will help them even more than discussion of “typical” cases.

For one ecommerce client, we were asked to develop a set of tasks that would help them spot marketplace listings of counterfeit items. Though some items were quite obviously counterfeit, not all were as easily identifiable.

[Image: pear_smartphone – the less-well-known Pear brand smartphone]

In addition to the clearer cases, we were able to identify trends that marked the more difficult edge cases of counterfeit products. These included signals like suspiciously low prices, or account names that hinted something was afoot (a name like **CHEEP**REPROS** might be a giveaway). By incorporating these special cases into the documentation, we were able to ensure quicker identification of tough-to-spot products.
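
Patterns like these can even be encoded as a pre-screening step that routes suspicious listings to human reviewers first. Below is a minimal sketch, assuming each listing carries a price and a seller name; the price threshold and name patterns are our illustrative guesses, not the client’s actual rules:

```python
import re

# Illustrative suspicious-name patterns; real markers would come from
# the edge cases documented with your external team.
SUSPICIOUS_NAME = re.compile(r"cheep|repro|replica|\*{2,}", re.IGNORECASE)

def flag_listing(listing, typical_price):
    """Return the reasons a listing deserves a closer look, if any."""
    reasons = []
    if listing["price"] < 0.3 * typical_price:  # assumed cutoff
        reasons.append("price far below typical for this product")
    if SUSPICIOUS_NAME.search(listing["seller_name"]):
        reasons.append("seller name matches a suspicious pattern")
    return reasons

listing = {"price": 79.0, "seller_name": "**CHEEP**REPROS**"}
print(flag_listing(listing, typical_price=699.0))
# -> flags both the low price and the suspicious seller name
```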


DOWNLOAD TIP SHEET

Keep this tip sheet handy for next time you need to document your data process, and stay tuned for more tips on using external teams.
