There are two things I know for sure: (1) researchers vary in style and personality, and (2) that variance sorts them into a few categories, especially when it comes to evaluating their own research execution. Some will always be unsatisfied (partly out of a desire for perfection), no matter how well a study actually went, while others will believe every study they have run used the best possible selection of methods. And of course there are those who ping-pong between these two extremes, asking themselves, “Did my study really go as well as I thought it did?” or, “Did my users/customers truly feel as though their experience, feelings, and needs were the focus?”
Undoubtedly, at one point or another, you will have a moment where it’s obvious a particular method wasn’t the “best” choice, or the approach taken in executing it simply did not go well. When this happens, a researcher’s reaction is crucial to the success of future studies, and to the ability to apply lessons learned and grow as a research practitioner. Here is a 5-step guide to help you recover from poor research execution.
1. Don’t make excuses
It’s easy to write off a poorly designed and/or executed study by playing the blame game – blaming other people, or pointing the finger at the “lack of” model (i.e. lack of resources, lack of time, lack of clarity/understanding, etc.) – anything to take the blame off yourself. While this might seem like the easiest approach in the moment, it simply masks your unwillingness to accept responsibility for what happened, and short-term blame makes it difficult to avoid repeating the same mistakes. Whichever approach you take – quantitative, qualitative, hybrid, stand-alone, or a continuum effort – the fact remains that in this digital world of design, data is king! If a study is poorly designed or executed and yields poor data as a result, the first step is unequivocally to accept responsibility and recommend how you (and/or your team) will improve your research collection in the future. This builds trust with your users, your colleagues, your management, and ultimately yourself. Accepting responsibility is a trait of a good leader, and the ability to facilitate and lead research studies is a trait of a good researcher.
2. Figure out what went wrong
Ultimately, while laying blame is unproductive, figuring out the root cause of what went wrong with a study – and why – helps the researcher avoid poorly designed or executed studies in the future. The difference between laying blame and problem solving is that the latter involves taking responsibility for what went wrong, even if the issue seems to have been beyond your control. As the research lead, it can be challenging to pinpoint exactly what went wrong, since you likely have a different perspective on the study than everyone else (i.e. your users). For this reason, it’s often helpful to review the goals of the study, along with the rationale for the method(s) chosen, with colleagues you trust to tell you the truth and/or your supervisor, so that any opinions or concerns are taken into account and addressed sooner rather than later.
3. Change what you can
If your study went poorly because you and/or your team were simply “off your game”, review the study design with people outside of your team and core user pool, and run a round or two of prep sessions with these “mock” users to knock out any kinks before launching the official study. On the other hand, if things went off the rails because of factors beyond your control, take some time to identify what you can change in the future to avoid running into the same issues again.
For example, say you chose to run an unmoderated usability test because your team didn’t have the bandwidth to schedule moderated, one-on-one sessions with every user. At the end of the study, however, you realized internal technical issues had prevented some users from successfully completing their sessions. Next time, you could design a hybrid study: first survey users to determine who is and isn’t equipped for unmoderated testing, then use that data to build a flexible schedule of moderated sessions for the rest. The key is having clear research goals and collecting useful, quality data – that is what ensures your study and the methods you’ve chosen are appropriate and well thought out.
4. Identify what you can’t change
Even with the best upfront preparation, there will sometimes be unavoidable problems you simply can’t do anything about. You can get better at handling unexpected issues like technical difficulties, or users who say they don’t have the time to engage with the study as fully as you need, but sometimes you’ll have one of those days when everything just seems to go poorly for no reason at all. In those cases, the problem-solving approach may not work so well. Rather than dwelling on things you can’t control, channel your energy into something you can, like preparing for your next study and writing a list of ways to make your approaches leaner and more adaptable.
5. Chalk it up to experience
As naive as it may sound, sometimes the best way to deal with a poorly executed study is to put a positive spin on it. Friedrich Nietzsche said, “That which does not kill us makes us stronger,” and researchers have been proving him right ever since. The trick is to never dwell solely on the WHAT, but rather on the WHAT NOW – what will I do now that I have learned from this experience, what will my team and I do better next time, what growth opportunities can I now leverage? Well, you get the point. After all, every bad experience really is an opportunity to build character!
It’s also important to recognize that in any research study, achieving 100% perfection is a fallacy. And that’s 100% OK. Most of my best studies came on the heels of a previous one where I felt a small adjustment here or there might have made a difference. We’ve all experienced at least one poorly designed and/or poorly executed study for one reason or another – and given that we each likely define “poorly” differently – we will probably all have more in the future. Use those experiences as bricks to form and reform your future studies, making them stronger, more impactful, and even more useful to your team, your users, and yourself. It’s a bit like bombing a musical performance or set – trust me, you’ll get over it – and in the end you’ll be a much better performer and research practitioner as a result.