You're barely conscious of the dozens of searches you execute every day. From online shopping to finding a vacation home, each search application has unique customer expectations for relevance. When it's done well, you'll hardly notice. But when the results are off the mark, you'll give up. Why should your users be any different?
The real secret to search is simple. It requires arming developers with the knowledge of those most familiar with the users: marketing, content curators, and domain experts. Unfortunately, conversations between these groups rarely go well.
Poor interaction between developers and content experts causes search quality to slide backwards. Content experts have little insight into how or why search behaves the way it does. Developers, on the other hand, lack understanding of what search should do: despite their technical expertise, they rarely know what "relevant" means for your application. Only the content experts carry this knowledge, which makes every change a developer attempts fraught with danger, liable to destroy the relevance of your search results.
Making holistic progress on search requires deep, cross-functional collaboration. Shooting emails or tracking search requirements in spreadsheets won't cut it.
Search changes are cross-cutting: a tweak that fixes one query often breaks others. Testing is difficult: you can't manually rerun hundreds of searches after every relevance change.
Moving forward seems impossible. To avoid sliding backwards, progress slows to a crawl. Many teams simply give up on search, depriving users of the means to find critical information.
Developers need content curators to define how individual searches ought to behave. With Test-Driven Relevancy, curators grade the results of important searches, and those graded results let you automate relevance testing.
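To make this concrete, here is a minimal sketch of what graded results driving an automated test might look like. The queries, document ids, `search()` stub, and the 0.5 baseline are all hypothetical stand-ins; substitute your own engine client and curator-supplied grades.

```python
# Minimal sketch of Test-Driven Relevancy: curators grade results per query,
# and an automated check reruns every graded query after a relevance change.
# All data here is hypothetical; search() is a stand-in for your engine call.

# Grades from content curators: 2 = highly relevant, 1 = somewhat, 0 = irrelevant.
JUDGMENTS = {
    "vacation rental maui": {"doc_101": 2, "doc_204": 1, "doc_350": 0},
    "beachfront condo":     {"doc_204": 2, "doc_101": 1},
}

def search(query):
    """Stand-in for your real search call; returns ranked document ids."""
    fake_results = {
        "vacation rental maui": ["doc_101", "doc_350", "doc_204"],
        "beachfront condo":     ["doc_204", "doc_101"],
    }
    return fake_results.get(query, [])

def precision_at_k(query, k=10, min_grade=1):
    """Fraction of the top-k results that curators graded as relevant."""
    grades = JUDGMENTS[query]
    results = search(query)[:k]
    if not results:
        return 0.0
    relevant = sum(1 for doc_id in results if grades.get(doc_id, 0) >= min_grade)
    return relevant / len(results)

def test_no_relevance_regressions():
    # Rerun after every change; a failing query pinpoints exactly what broke.
    for query in JUDGMENTS:
        assert precision_at_k(query) >= 0.5, f"relevance slipped on {query!r}"

if __name__ == "__main__":
    test_no_relevance_regressions()
    print("all graded queries meet the baseline")
```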
Exposing the reasoning behind the search engine's rankings reveals what action to take. This brings developers and content curators to a common understanding.
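As one illustration, engines such as Elasticsearch can attach a score explanation to each hit (Solr's `debugQuery` plays a similar role). The sketch below assumes a local Elasticsearch node with a hypothetical `listings` index; the URL and field name are illustrative assumptions.

```python
# A sketch of asking the engine *why* a result scored the way it did,
# using Elasticsearch's explain option on a search request. The index
# name, field, and localhost URL are illustrative assumptions.
import json
import requests

resp = requests.post(
    "http://localhost:9200/listings/_search",
    headers={"Content-Type": "application/json"},
    data=json.dumps({
        "explain": True,  # attach a score breakdown to every hit
        "query": {"match": {"description": "beachfront condo"}},
    }),
)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    # _explanation is a tree showing how each term and field contributed
    # to the score, giving developers and curators a shared vocabulary.
    print(hit["_id"], hit["_score"])
    print(json.dumps(hit["_explanation"], indent=2))
```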
Test-Driven Relevancy means tracking hundreds or even thousands of searches, highlighting relevancy issues as they arise, and evaluating the effect of changes over time.
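What that tracking might look like, sketched below under the same assumptions: score every tracked query on each run, keep the history, and flag anything that got worse. It reuses the hypothetical `precision_at_k()` and `JUDGMENTS` from the earlier sketch.

```python
# Sketch: evaluate the whole tracked query set on every relevance change,
# record the scores, and surface regressions. Builds on the hypothetical
# precision_at_k() and JUDGMENTS defined in the earlier sketch.
from datetime import date

history = {}  # query -> list of (run_date, score) across relevance changes

def evaluate_run():
    """Score every tracked query and append the result to its history."""
    for query in JUDGMENTS:
        history.setdefault(query, []).append((date.today(), precision_at_k(query)))

def regressions():
    """Queries whose latest score dropped relative to the previous run."""
    return [
        query for query, runs in history.items()
        if len(runs) >= 2 and runs[-1][1] < runs[-2][1]
    ]

# Typical flow: call evaluate_run() before and after a relevance change,
# then inspect regressions() to see exactly which searches got worse.
```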