Access the Design-a-Study Workshop & become an advance reader for my book!
I hosted another Developer Science Office Hour last month, and for some reason, decided I needed to blackmail myself into making it a brief (very brief!) workshop on research design. In general I am a fan of prosocially pressuring myself, as well as running chaotic experiments, and I had posted on my social media that I'd put this thing together if folks shared this newsletter enough to gather 20 signups in a week. Well, several hundred happened in that week, so I knew I had to do it. 😅
There's absolutely no way to get through research design in a single hour, but I wanted to open up a space for people to get validation for a few important themes that I've heard over and over again from software teams:
- Our work is demanding more and more evidence and measurement, but not providing much support to people trying to measure and advocate for human outcomes in their workplaces.
- A lot of conventions about research methodology stay locked away in the ivory tower of academia, and it's hard to bridge the gap between theory and applied projects.
- We often see people make generalizations from research studies that don't match our lived experience, but don't always have the vocabulary to fully describe our concerns. This makes it harder to criticize conclusions based on data that we may not actually agree with.
Was it a lot to host a free open workshop while deep in book edits? Yes. Was it fun anyway? Yes! In the spirit of open-access learning, I make it a policy to record all of my Dev Science Office Hours so they can be shared with anyone who can only participate asynchronously. You'll notice there's a ton of content we didn't get through, so these topics may make an appearance in future Office Hours.
One of my biggest takeaways from this hour for my future work: spending more time on how we define and decide on measurements for psychological constructs could be really useful to teams.
I also think that when you're in a position of authority running a measurement strategy, securing buy-in from the people in your organization about why the constructs you've chosen are the right ones is integral. Better yet (for you, for them, and for the organization's actual learning), invite them to help you design what the measurement strategy focuses on in the first place. This is a principle I take from action research, a community-oriented perspective on the goals of research that I think could really help folks in technical spaces.
An opinionated take from me on this:

Construct validity has always been an incredibly important part of the puzzle of thinking about developer research for me, and I personally think it's also a reason that so many corporate reports about developers feel unsatisfying to people.
Now obviously, sometimes we need to sit back and listen to some expert choices about constructs. Without more domain expertise (or a lot of time to listen to experts), I don't know how to design the measures for a study of liver function. But expert practitioners absolutely have domain expertise that will let them see through bad operationalizations that don't really point to the constructs we say we're pointing to. And if you don't agree with the constructs chosen for a study in the first place, no amount of analysis will convince you that the study was the right one to inform your life.
Here's a table from experimentology.io that I find super useful for anyone working through a project design. It's adapted from a Flake & Fried (2020) psychological science paper that also has a great title: Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them.
| Question | Information to Report |
|---|---|
| What is your construct? | Define construct, describe theory and research. |
| What measure did you use to operationalize your construct? | Describe measure and justify operationalization. |
| Did you select your measure from the literature or create it from scratch? | Justify measure selection and review evidence on reliability and validity (or disclose the lack of such evidence). |
| Did you modify your measure during the process? | Describe and justify any modifications; note whether they occurred before or after data collection. |
| How did you quantify your measure? | Describe decisions underlying the calculation of scores on the measure; note whether these were established before or after data collection and whether they are based on standards from previous literature. |
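One way to make the last row of that table concrete is to write your scoring rule down as code before you collect data, so the decision is documented rather than improvised later. Here's a minimal sketch, assuming a hypothetical five-point Likert scale with two reverse-worded items (all specifics here are illustrative, not from any particular study):

```python
LIKERT_MAX = 5  # hypothetical 1-5 agreement scale

# Items 2 and 4 of this imaginary scale are negatively worded, so we
# reverse-score them. Recording this decision (and when it was made)
# is exactly what the table above asks for.
REVERSE_CODED = {1, 3}  # zero-based indices of reverse-worded items

def score_scale(responses, reverse_coded=REVERSE_CODED, likert_max=LIKERT_MAX):
    """Return the mean item score after reverse-coding flagged items."""
    adjusted = [
        (likert_max + 1 - r) if i in reverse_coded else r
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

print(score_scale([4, 2, 5, 1]))  # reverse-coded to [4, 4, 5, 5] -> 4.5
```

The point isn't the arithmetic; it's that a ten-line script forces you to commit to decisions (reverse-coding, averaging vs. summing, handling missing items) that otherwise get made silently after you've seen the data.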
If you watch the Office Hour, you'll notice I also chose not to distract us from the conversation with the interactive exercises I've done with teams before. Managing time with so many big questions is hard! But if you've got a team working together on designing a research project, I highly recommend a few of the exercises in my slides. Definitely let me know if you try any of these, and if you want to see more content like this.
Most of all I want to say thank you to everyone who was brave enough to join and make this wild little workshop idea a real thing in the world. It's always a little scary to jump into a zoom call with a bunch of strangers, but you have made it lovely every time so far.
Here are all of my slides, which you are free to review, re-use, re-mix (I just ask that you give attribution where appropriate and let people know you benefitted from my work). Here's the video:
I mention a few resources that I like to send people about evidence and research at the end of the Office Hour, and here they are:
https://experimentology.io/
https://github.com/rmcelreath/stat_rethinking_2024
https://www.nature.com/articles/s41562-020-00990-w
Now, in further news, I have a December gift for you! 🎁
I am finally able to open up a general call for ADVANCE READERS for my book: THE PSYCHOLOGY OF SOFTWARE TEAMS (coming 2026)!
You can indicate your interest in this form.
I cannot tell you how much it means to me to share this! I am able to select a handful of folks to receive a free advance copy, and I would love to ensure this group represents a broad range of perspectives and experiences. And you are welcome to share widely.
A few important details: this is not an obligation or commitment, just a way to let me know you're interested. Please don't self-select out if you don't have a "platform" or answers for some of the questions; those are optional, and alongside more senior folks, I'm intentionally including junior career folks and diverse perspectives! And please DO fill this out even if you think someone else should get a copy "first" (a few kind folks have asked; you can tell me that in the form). The more interest, the more I can invest in sharing the book joy!