About
Andrew Branch joined Measures for Justice in 2015. As Director of Product Engineering, Andrew oversees MFJ’s engineering effort to collect and manage criminal justice data and the product line to bring it to the public. Andrew brings his 30 years’ experience and passion for software development and team building to the position. He has designed and delivered numerous business and consumer-oriented products over that time.
Andrew has a BS in Computer Science from Siena College and an MS in Computer Science from Rochester Institute of Technology.
Recommended Reading
Ordinary Injustice: How America Holds Court, by Amy Bach.
Good to Great: Why Some Companies Make the Leap and Others Don’t, by Jim Collins.
In this episode of the Product Momentum Podcast, Sean and Paul welcome Andrew Branch, Director of Product Engineering at Measures for Justice (MFJ). MFJ, an ITX client and Rochester, NY neighbor, is a criminal justice research organization whose mission is to make accurate criminal justice data available and accessible to all – and to leverage this same data to spur societal reform.
These data are jarring. As Andrew reports –
- As many Americans have a college diploma as have a criminal record – a burden that falls mostly on people of color.
- One in three black men born in 2001 will likely be imprisoned at some point in their lifetime. For Latino men, the number is one in six; for white men, it’s one in 17.
- Each of the more than 3,000 counties in the U.S. operates its own variation of a criminal justice system – a vast, complex system that includes law enforcement, prosecutors and defense counsel, courts and jails, and so on. On top of that, these same jurisdictions craft their own policies and use their own data systems to track it all.
These data demand answers to many questions, not least of which is: how are we to make informed decisions about things we can’t isolate, measure, and compare? Thankfully, Andrew Branch and our friends at Measures for Justice are committed to building solutions that leverage technology to deliver vital societal change.
“At MFJ, we collect countywide criminal case data, from arrest to post-conviction,” Andrew says. “We then clean it up, normalize it, and package it into performance measures that provide a comprehensive picture of how cases are being handled across the entire criminal justice system. We then make it available to the public on our free data portal.”
Interviewing clients is a treat for us. So be sure to tune in. The lessons here are as vital to product people as they are to those of us imagining a world in which social justice reigns.
Sean [00:00:18] Hi, welcome to the Product Momentum Podcast, a podcast about how to use technology to solve challenging technology problems for your organization.
Paul [00:00:28] Hey, Sean, how are you doing today?
Sean [00:00:30] I’m doing awesome, Paul. Looking forward to this one.
Paul [00:00:32] Yeah. Full disclosure to listeners, we are talking to a client, a rare treat for us. Measures for Justice is doing great things in the world.
Sean [00:00:40] Yeah, they’ve got huge goals that they know they’ll never achieve fully. But the cause, first of all, it’s timely with all the things going on in our society. But it’s a really good cause and really interesting products. And we get to go deep into this domain and talk with a real product leader in the civic space.
Paul [00:00:57] Absolutely. We get into ethics, we get into bias. We get into a little bit of building a better data set. And if you’re listening to this and the cause resonates, feel free to go to measuresforjustice.org/donate and chip in what you can. They’re doing great work.
Sean [00:01:14] And read Amy’s book, which we’ll also put the link into the podcast transcript here.
Paul [00:01:19] All right. Let’s get after it.
Sean [00:01:20] Let’s get after it.
Paul [00:01:24] Hello, everyone, and welcome to the podcast. Today, we are excited to be joined by Andrew Branch. Andrew joined Measures for Justice in 2015. As Director of Product Engineering, Andrew oversees MFJ’s engineering effort to collect and manage criminal justice data and the product line to bring it to the public. Andrew brings his 30 years of experience and passion for software development and team building to the position, and he’s designed and delivered numerous business and consumer-oriented products over that time. Andrew, we’re super excited to have you. Thanks for joining us.
Andrew [00:01:54] Hey, guys, thanks for having me. It’s really an honor to be invited on this podcast. I know that it’s a great podcast. You guys had some real luminaries on here like Stephen Covey and Jared Spool, so it’s a little intimidating, but I hope that I can provide some valuable information.
Paul [00:02:08] Absolutely. So just to kick things off, I’m curious if you could share just a bit about what Measures for Justice is, for those who haven’t heard of it before, and then, you know, we can go from there.
Andrew [00:02:18] Measures for Justice is a criminal justice research organization. We’re located here in our fair city of Rochester, New York, and what we do is we collect data from counties and we collect data from arrest to post-conviction. We take that data and we clean it up and we package it into performance measures. The idea is that what we want to do is provide a comprehensive picture of how cases are being handled across the entire criminal justice system. We take all that data and we package it up and we put it back on our free data portal and provide it back to the public.
Andrew [00:02:51] Your listeners, I’m sure, are well informed of some of the problems in the criminal justice system, but, you know, there are lots of problems. So the US incarcerates a huge number of people, more people than any other country in the world. We have five percent of the world’s population, but we have 25 percent of the world’s prisoners. As many Americans have a college diploma as have a criminal record in this country, and it mostly impacts people of color. So one in three black men born in 2001 will likely be imprisoned sometime in their lifetime, one in six Latino men and one in 17 white men. And one thing that I learned at Measures for Justice is really that most of what happens in criminal justice is carried out at the county level and typically it’s the county’s largest expenditure. So we live in Monroe County, New York, and it’s over half of our budget. And that includes like the D.A. and the sheriff and public defenders and public safety. And so we’re really putting a huge percentage of our resources into criminal justice.
Andrew [00:03:54] And, you know, the organization was founded by Amy Bach. She wrote a book back in 2010 called Ordinary Injustice: How America Holds Court. And she spent years going to lots of small courthouses around the country and interviewing a lot of people in Georgia and Mississippi and Chicago. And what she found is really kind of an assembly line approach to justice, where public defenders would plead out, you know, all of their defendants, often with very little knowledge of their circumstances, where defendants didn’t necessarily know what they were pleading to when they showed up in front of the judge, where, you know, outrageous bails were being set by judges for negligible crimes and in some cases where defendants were spending a long time in jail simply because they couldn’t pay small amounts of bail.
Andrew [00:04:43] So Amy’s book concluded with the idea, you know, that we should be able to create some sort of system of measures that can look at the system objectively. Amy’s book was really well received. Actually, she had no plans of creating a nonprofit. She was going to move on to her next book, but she received seed funding from an anonymous donor who wanted her to start an organization to kind of look at this problem. What’s interesting about the criminal justice system in this country is that it’s not like one system. It’s really thousands of systems. There are over 3,000 counties in the US and each one has their own law enforcement and prosecutors and public defenders and courts and jails, and each one of those has their own policies, they have their own data systems to track this. They have their own… and the question is, like, how can you make informed decisions about what’s going on in your criminal justice system when you can’t really take a kind of comprehensive look at it and you can’t measure and see how past decisions have impacted the system?
Paul [00:05:40] Yeah, so that’s the great question. How do you manage what you can’t measure? And Amy Bach’s book, I’ve read it cover to cover, I would recommend it to anyone, but if you do pick it up, I think you need to read the whole thing because if you stop in the middle, to be perfectly honest, it’s quite depressing. But there’s hope at the end. It’s easy to listen to the ideas that we were just talking about and become cynical. But I think that the core of Measures for Justice is about envisioning a better way and doing that through technology. So I want to jump into that topic. You’ve been in product, in software and design, for your career. How do you see the rigor and methodology of volunteer, not-for-profit products intended for civic good differing from commercial, individual, or enterprise products?
Andrew [00:06:26] Yeah, so we’re a civic tech organization, right, and the goal of civic tech is really to kind of fill the gap between government and the public and to help inform the public and help to push civic outcomes. And so I think what is really different for us, being a nonprofit and being in this space, is it’s not necessarily the technology or the internal processes we use to solve problems, but it’s really kind of the whole ecosystem that is sitting around our organization, right. So, you know, one thing is that we’re largely funded by large foundations. They give us funding because they believe in our mission and they’re looking for some sort of return on the investment that they put in us. You know, it’s a little harder to quantify what success looks like and what our outcomes are when, you know, we’re giving data back to the public and to advocates. And it’s hard to measure the impact that we’re making. And I think that’s one of the nice things about for-profit. I mean, dollars earned is a really great metric, right, and it’s something everybody can agree on. For us, it’s a lot more complicated.
Andrew [00:07:38] You know, the other aspect that is different is that our employees are really engaged. And I think it helps us in a lot of ways and recruits great talent. And we’ve really recruited some really great people because they’re really excited about our mission. They come to us because they want to work specifically on something that they feel good about every day. And I feel the same way. And I think that’s helped us to really build a great team and maintain that team. But you got to have the tech chops as well. And we even find that there’s people that want to volunteer their time. And that is a bit of a challenge. We’ve had a hard time figuring out how we can do that effectively. Sometimes it’s worked, but there’s a lot of training that somebody needs. There’s also access to some of our data. You need background checks and things like that. We don’t want to impact the rigor, but it is something that we’re still interested in and figuring out how we can leverage. It just needs a lot of kind of support to kind of maintain that. And I think another part of the ecosystem that’s different for us somewhat is that the user base we have is pretty wide. So we have just the general public coming to our site. We have advocates, journalists, academics, legislators, people that are practitioners within the system. And again, you know, because we give the data away, so it is a little bit more difficult to really kind of measure that impact and use.
Sean [00:08:56] Yeah, well talk about an incredible problem to solve with software in such a dynamic space. It seems like you have an infinite amount of work in front of you. And that’s got to be a challenge, like just to look at the broad scope of what has to be done to really make the criminal justice system as transparent as it should be.
Andrew [00:09:14] It is a really broad problem, right, and it’s a very hard problem. We launched our data portal with six states. We are up to eight states right now. We’re going to reach 20 states by the end of this year, and that’s a goal we set a while ago. We’re actually going to be launching three new states on Monday. And I think one of the issues is that because of the situation we’re in, we’re collecting data from these kind of resource-constrained counties or states and we have to take whatever data, really, they can provide us. We can’t really force them to clean their data in a way that would make it really nice for us. And so we really take that burden on ourselves of taking whatever data we can get, and then there’s a huge amount of effort. So much of the work that we do is really taking that data in and normalizing it into the format that we can build our measures off of. And I think that’s an area that we want to do a lot of work in.
Andrew [00:10:03] First off, there’s a lot of work that we need to do to figure out how to continue scaling that up and scaling that pipeline so that we can kind of repeatedly collect that data and keep it up to date and collect a broader set of data moving forward. But what we want to do is figure out a way to kind of change our relationship with the people that are providing the data and I think really kind of improve the value proposition for them so that there’s much more incentive for them to provide us the data. And we think that we can use that to help them give us better data. And so we hope it’ll set up a virtuous cycle where what we’re providing them is seen as more and more valuable over time so then they will be providing us with better data, which will kind of make it easier, which will kind of help us be able to provide more data and more insights.
Sean [00:10:43] So with that statement, it’s really a broad problem to solve and a broad set of things to get to the solution. How do you guys manage prioritization? You also have a complicated set of stakeholders. I mean, you have investors that are literally providing this bundle of cash for you to operate from. And then on the flip side of that, the organization is supposed to be serving the people at the end of the day. And that in itself is a broad set of stakeholders. So I’m just curious. With so much to do, how do you manage priorities?
Andrew [00:11:10] You know, currently we’re mostly funded by large foundations and a lot of those foundations are invested in our mission and they’re interested in what we’re doing. And so, you know, I think it’s a back and forth where we have to kind of communicate what we think is possible and they’ve kind of invested in the idea. And in some ways that can drive some of our priorities, a lot of our priorities. I think one of the challenges is that really to get this whole enterprise in place required a lot of infrastructure and a lot of uninteresting, non-sexy work to kind of get the baseline in place. You know, with data, it seems like ninety-five percent of the work is just drudgery to get the data into a state for that five percent where you can start doing cool analytics on it and actually getting some insights. And so we’ve got to get them invested both in the entire process that we’re trying to build and in the ultimate outcomes that we’re trying to get at.
Andrew [00:12:04] So that’s one aspect of it. Also, we’re trying to serve the public and trying to get the data in the public’s hands. So the type of customers we have are both the public at large and interested advocates, but they’re also practitioners of the system. So prosecutors or public defenders or legislators are looking at our sites. So we have to kind of build a system that meets all of their needs and that can be a difficult balancing act. Like the way we describe our measures in a rigorous, complete way that is completely accurate and factual, sometimes that is in conflict with actually making it understandable and easy to read and kind of accessible, right. Another aspect of that is, a lot of practitioners early on when they were looking at our data were concerned that people would be cherry-picking the data. They would just be looking for one data point and kind of zeroing in on that, and so we have set up the data portal to provide a lot of contextual information. So when you’re looking at any one measure, we also provide companion information and contextual information about that county and also provide legislative information. So if you’re looking at a measure that is talking about somebody who was in jail for not being able to pay low bail, we’ll also provide some state statutory information to get you a sense of like, you know, what are the conditions for where somebody is offered bail or not offered bail? Or are there different laws around when somebody can be deferred pre-trial or something like that? So we try to provide a larger picture than just a particular measure.
Paul [00:13:33] So I want to dig in on potentially something a little bit sticky, but you’ve been talking about better data and I’m far enough down the Dunning-Kruger curve to know what I don’t know about data science. I’m not a data scientist and I don’t play one on TV. But I’m curious, how do you ensure that rigor? How do you prove out, to borrow from Kasia Chmielinski’s recent discussion, the nutrition of your data? How do you talk about whether or not you’re influencing or cherry-picking or biasing data in some way? Is there a process that you go through to review that the virtuous cycle isn’t actually influencing something in ways that you don’t intend, consciously or subconsciously?
Andrew [00:14:10] You know, we have a lot of discussions about this internally. And, you know, it starts with our measures and we argue internally, like, the measure itself actually has bias, because what we’re choosing to measure says something about who we are and what we’re looking at. And so initially, we had a large number of measures. We had like a hundred and twenty or something measures that we came up with. And these were created by some of the top criminal justice academics in the country. And we presented them all over the country to lots and lots of practitioners and academics and journalists and got more and more feedback on those. And we really started whittling those down to what universally is starting to become agreed upon as a fair set of measures.
Andrew [00:14:49] You know, it’s super important for us to remain unbiased because both practitioners within the system and the public at large are really depending on us to be that fair player, right. And so we really need the practitioners to trust us, to give us the data, to know that we’re not going to be trying to take that data and cherry-pick it in a way that is intentionally trying to twist it to make them look bad. But we also need to be able to present that data to the public in a way that they can trust us and not think that we’re just trying to be spin masters or something for the practitioners. All of them agree that we need to play this role, so that helps a lot. I think we are comfortable with really advocating for data transparency. I think that’s an area where we feel that we can be unbiased, or be biased, I should say.
Andrew [00:15:31] We’re very biased. We believe that data transparency is valuable and it’s an important thing and we’re trying to get everybody on board with that. And what’s really interesting about data transparency is that it seems to be an issue that both the left and the right really agree on. The left and right agree on data transparency and puppies are cute and the sky is blue. I think those are the three things that they agree universally on, but it seems that that’s one. So that’s fantastic. So we’re biased there. We advocate for that. We’re actually launching a whole new sub-section of our website that’s called the State of the Data. And what we’ve been doing is really looking comprehensively across the country at what legislation exists around data transparency and where the data sits. And then also we set up a set of actions and kind of next steps for somebody coming to the site to advocate for passing legislation in their state, and we provide model legislation and some of the benefits of doing that. So that’s an area where we’re going to be doing a lot of advocacy in the future.
Paul [00:16:28] That strikes me as sort of core to the vision. It comes through right from the start, right in Amy’s book. When I was reading the book, one of the things that jumped out at me is just how imperfect the system is, how at times, the way that we have things set up, to the untrained observer, it seems to be a system handed down from on high. But it’s really a system of humans. It’s made up of people. It’s flawed, but it’s something. And I think that that is sort of the reflection of MFJ to some extent because you have an imperfect data set, but it is getting more and more perfected as you get more sources and more angles to look at details. So I’m curious when you’re looking at the pixel level, how do you decide what the experiences are? Because the charts that you show on a dashboard influence what’s going through the user’s mind. How do you choose what gets displayed? Because what you measure is what gets influenced.
Andrew [00:17:19] Yeah, so this is obviously an area of concern that we focus on a lot and, you know, to start off, yeah, the system is really imperfect and the data is incredibly messy, and it starts with spending a lot of time figuring out how we can standardize and normalize that data in a way that you can actually compare one county to another. And that took us years, really, to develop the methodology to be able to do that comparison. Early on, when we developed the measures, we started presenting them and there was a powerful circuit court judge. We presented just an early prototype. We didn’t even have data. So we were just like, “here’s the concept of how we might be able to compare two counties.” And he said, “well, you can’t compare two counties; they’re completely different things. One is like an orange, one is like an apple. There’s no way to compare those.” And we argued back, “well, you can compare everything else in our society, right? We compare schools and we compare hospitals and we compare lots of things.”
Andrew [00:18:14] So really, the first part was getting a set of measures in place that were fair and that you could compare across counties. And then there is a lot of thought about how we present that data back to the user. And so all of our data can be broken down in lots of different ways, but the key way is you can break it down by race, so you can compare how white people are treated versus people of color as they move through the system. And you can also compare indigent status. And so indigence means somebody was assigned a court-appointed attorney; it’s kind of a measure of poverty. And both of those really get at some of the fundamental concerns that people have about the criminal justice system. Like, are people being treated fairly based on their skin color or based on their economic status? And there was a lot of work around how we present disparities fairly.
Andrew [00:19:04] And what we’ve settled on is something called a relative rate index. Essentially, it looks at people once they enter the system. So let’s say we’re looking at a prosecutor’s office. A particular county might have a very small African American population or a very large one, so the raw numbers are going to look quite different depending on the mix of the county. But the way we present the data is that once the person is in the prosecutor’s office, we structure our measure so that it’s looking at how they’re treated from that point forward. So even though African Americans in a particular county might be a very small percentage, we’re asking, of those that showed up at the prosecutor’s office, what percentage was deferred into some alternative prosecution, or what percentage was stuck in jail because they couldn’t pay a low level of bail? So I think that’s one of the really key ways we tried to address that. And then also, as I said before, providing the contextual information and the statutory information helps give a broader view of what’s going on.
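To make the relative rate index concrete, here is a minimal sketch of the calculation Andrew describes, written in Python. The field names, outcome labels, and toy records are hypothetical illustrations rather than MFJ’s actual schema or methodology; the point is simply that each group’s rate is computed only over cases that entered the system and is then expressed relative to a reference group.

```python
from collections import Counter

def relative_rate_index(cases, outcome, reference_group="White"):
    """Compute a simple relative rate index (RRI) by race.

    `cases` is a list of dicts with hypothetical fields `race` and
    `outcomes` (a set of outcome labels). Each group's rate is computed
    only over cases that reached this stage of the system, then divided
    by the reference group's rate, so an RRI of 1.0 means parity.
    """
    totals, hits = Counter(), Counter()
    for case in cases:
        totals[case["race"]] += 1
        if outcome in case["outcomes"]:
            hits[case["race"]] += 1

    rates = {group: hits[group] / totals[group] for group in totals}
    ref_rate = rates[reference_group]
    return {group: rate / ref_rate for group, rate in rates.items()}

# Toy example: of the cases reaching the prosecutor's office,
# what share of each group was held on low bail?
cases = [
    {"race": "White", "outcomes": {"diverted"}},
    {"race": "White", "outcomes": {"held_on_low_bail"}},
    {"race": "Black", "outcomes": {"held_on_low_bail"}},
    {"race": "Black", "outcomes": {"held_on_low_bail"}},
    {"race": "Black", "outcomes": {"diverted"}},
]
print(relative_rate_index(cases, "held_on_low_bail"))
# {'White': 1.0, 'Black': 1.333...} -> held roughly 1.3x as often as the reference group
```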
Sean [00:20:06] So the stakes are high. And I always think of this dilemma between ‘perfection is the enemy of progress,’ right, versus ‘good is the enemy of great.’ And you have this constant balance. How do you manage what you’re going to put out there in the world and make sure the data is right enough?
Andrew [00:20:29] So the process we go through is really involved with the data. It starts with building a relationship with each of the agencies, often at the state level, but sometimes at the county level, to collect the data. And then, when we bring it in, you know, as I said before, each data bit that we collect is like a snowflake and they’re all unique. And so we’ve built a process to try to normalize that and we spend a lot of time up-front doing analysis of the data, and then we collect any questions we have about the data and reach back out to the providers of the data to say, “are our assumptions correct on what this data says?” Then after that, we do a lot of work to normalize the data so we can build our measures off of it. And then we publish that data out to a private data portal and give the agencies early access to it to kind of audit the data and give feedback and say, “does this look right?” And I should also say that our organization, we have a lot of software developers. We also have an equal mix of researchers and people with criminology degrees that are really meticulous in looking through and understanding the data. In order to do this work, we really need a lot of domain experts that really understand the data because so often the data is messy and needs to be interpreted to understand what was being said. So they’re doing a lot of work to code this data.
Andrew [00:21:52] We really kind of look at the data at the case level. So we’re trying to really follow what happened to a case across multiple agencies. And often the charge description is just written in free text. So the exact same crime might be specified 50 different ways because different people are typing it in, they’re using different abbreviations…
Sean [00:22:12] Even in different states they call things different, right?
Andrew [00:22:14] Across states it’s different, across counties within a state it’s different, within a county it’s different depending on who might be entering the data on that particular day. This is one of our biggest problems that we’ve been trying to solve, and we’re getting better and better at it. We have actually invested a lot of human time in classifying and categorizing and looking for outliers and trying to figure that out. With all of that knowledge we’ve gained in this human work to classify these charges, we’ve been taking that data and we’ve been working with some experts in A.I. to help us build a machine learning model. There’s this organization called CJARS at the University of Michigan, and they are helping us build a machine learning model to really do that classification and automate it. It’ll never be fully automated. We’ll always have to have some human involvement in the process to look at outliers and work with those, but that is one of the key areas where if we can get that kind of classification of the charges, that will really help us scale our process a lot.
Sean [00:23:11] I’ve learned to never say never in the technology world, but I agree. I think it’s going to be a long, long time before that sort of stuff is fully [developed]. What a great use of that technology to help augment our efforts to fix this really big societal problem.
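As a rough illustration of the charge-standardization work Andrew describes, the sketch below trains a generic text classifier on hand-coded charge descriptions. The labels, example strings, and model choice (character n-gram TF-IDF plus logistic regression via scikit-learn) are assumptions for demonstration only, not the actual model MFJ and CJARS are building.

```python
# Generic sketch of free-text charge classification; everything here is
# illustrative, not MFJ's or CJARS's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-coded examples like those produced by the human review work
# described above: the same offense written many different ways.
training_text = [
    "POSS CONTR SUBST 3RD",
    "possession of a controlled substance - 3rd degree",
    "BURG 2ND DEG DWELLING",
    "burglary second degree",
    "DWI - OPERATING WHILE INTOXICATED",
    "driving while intoxicated 1st offense",
]
training_labels = [
    "drug_possession", "drug_possession",
    "burglary", "burglary",
    "dwi", "dwi",
]

model = make_pipeline(
    # Character n-grams are fairly robust to abbreviations and typos.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(training_text, training_labels)

# New, unseen variants get mapped to the standardized categories;
# low-confidence predictions would still be routed to a human reviewer.
print(model.predict(["POSSESSION CONTROLLED SUBSTANCE", "BURGLARY - 2ND"]))
```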
Andrew [00:23:25] You know, it’s interesting, if we can’t even figure out what people were charged with in a standardized way we can compare, how are we ever going to do the comparisons? It really starts with that, right. Another type of problem along those lines is identifying a particular person across the data set. So we collect data from courts and from jails and prosecutors, and there’s no universal identifier for a person necessarily, right. So they might have a particular ID in one system, but they don’t have the same ID in the other system. They’re just identified by their name and their birthdate, but sometimes the middle name is abbreviated, sometimes it’s fully spelled out. Sometimes suffixes like junior or senior are added on, and then sometimes birthdates get typed in incorrectly. So just even knowing who the person is and following them through the system is really difficult.
Andrew [00:24:06] But here’s the fascinating thing: all of these systems are silos. You know, the prosecutor has their own system that they built and it’s the system they chose, right. And the next prosecutor in the next county might have picked a completely different system. And then within any one county, each of those systems is completely independent. They’re not connected at all. And these systems were built really for case processing, to manage the cases as they move through the system. They weren’t even built with the idea that “oh, we’re going to go back and dig into this and do some analysis on that.” So what’s interesting is what we’re finding now, as we are working more closely with prosecutors especially, and measuring, is that it’s actually starting to influence the data and the way that they collect it so that they can get more accurate measurements back out. And so there’s really another kind of virtuous cycle, where the act of measuring actually helps improve the data because it incentivizes you to improve your data so that it becomes more accurate over time.
Andrew [00:25:01] You know, when we initially set out to do this work, we had some early thoughts about like, “oh, we’ll just create a national data standard that everybody can adopt and we’ll just like work through the federal government to get everybody to implement this.” We realized that was never going to happen, right. Or it will, but we need like 50 years to make it happen. But what’s interesting is the measures themselves have become kind of this de facto standard that people are starting to agree upon and now we’re seeing the data is starting to improve so that we can measure it more accurately.
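The person-matching problem Andrew raises is a classic record-linkage task. The sketch below, using only the Python standard library, shows how records with no shared identifier might be linked on a normalized name plus birthdate; the field names, suffix list, and similarity threshold are illustrative assumptions, and a real pipeline would handle far more edge cases, including the birthdate typos Andrew mentions.

```python
# Minimal sketch of linking a person across siloed systems by
# normalized name + birthdate. All fields and thresholds are assumptions.
from difflib import SequenceMatcher

SUFFIXES = {"jr", "sr", "ii", "iii"}

def normalize_name(name: str) -> str:
    """Lowercase, drop suffixes, and reduce middle names to an initial."""
    parts = [p.strip(".").lower() for p in name.split()]
    parts = [p for p in parts if p not in SUFFIXES]
    if len(parts) > 2:                       # keep first + middle initial + last
        parts = [parts[0], parts[1][0], parts[-1]]
    return " ".join(parts)

def likely_same_person(rec_a: dict, rec_b: dict, threshold: float = 0.9) -> bool:
    """Fuzzy-match two records on name, requiring birthdates to agree."""
    if rec_a["dob"] != rec_b["dob"]:         # a fuller pipeline would tolerate dob typos too
        return False
    score = SequenceMatcher(
        None, normalize_name(rec_a["name"]), normalize_name(rec_b["name"])
    ).ratio()
    return score >= threshold

court = {"name": "Robert J. Smith Jr.", "dob": "1984-03-02"}
jail = {"name": "Robert James Smith", "dob": "1984-03-02"}
print(likely_same_person(court, jail))  # True
```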
Sean [00:25:31] I’d just like to poke lightly on your experience with workshopping. You’ve had some experience with IDEO. You guys have done a fair share of things internally. What are some things you’ve learned in that space since you’ve been working with MFJ in terms of how to guide the team or gain alignment?
Andrew [00:25:47] So our goal is really to get 20 states out this year. So for the last year, we’ve been thinking a lot about, “where do we go next and how do we move forward?” And we had kind of an early seed of an idea last year that what we want to try to do is bring our national data portal down to the local level and kind of embed that data right, maybe at the prosecutor’s website, and hook directly into their data system and give them analytics, like real-time on their own system. So we were trying to figure out, like, what that looks like and what that system could be. We did a workshop with ITX that you guys generously donated. And Sean, you helped lead us through the process of thinking about what that product could be. And working through that process with you, I think it really helped us, you know, we built a mission statement for this product and it seems like such a simple thing to do. But just building that mission statement helped really crystallize the idea that we were trying to build and it also helped get all of our senior leadership on the same page. And so what that workshop looked like is we brought our senior leaders together, we brought a practitioner, and we talked about the problems and we talked about how Measures for Justice could use our expertise in criminal justice data and in measurement to help the prosecutor make better decisions and how we could take that data and make it available to his local community to help them understand what the prosecutor was doing.
Andrew [00:27:21] And the prosecutor that we were working with was very interested in being a transparency leader. And so pulling all this together helped us kind of crystallize this idea of creating what we’re calling a Community Transparency Portal. And the idea is to really take our data portal, make it available to a county, and allow a prosecutor to really completely show, like, how cases are moving through their system, how defendants of color are experiencing the system versus white defendants, and how they completely move through the system. And so who’s getting deferred, who’s getting bail, who’s getting prosecuted, for what types of crimes? And we think that this is a really exciting opportunity to really create some more actionable data and really engage the public with criminal justice data and help answer some of the questions that the community has had. And I think that prosecutors are concerned that these stories come along that are kind of one-off anecdotes where something really horrible happens and they don’t believe that it is representative of all of the work that they do and that sometimes problems happen. And what they would like to do is kind of present, “here’s everything we do, our cases are moving through, how we’re treating our defendants, and we believe that we’re treating them fairly.” I think the public really is interested in getting a lot more insight. And what we want to do is really create a data-informed conversation that’s based on the facts and not just on anecdotes between the public and the prosecutor. And hopefully that conversation can help lead to policy changes.
Andrew [00:28:53] And also what we ideated out of these workshops is that there’s an opportunity to even get the prosecutors to, in some cases, set a particular goal, or maybe present a goal that they’ve had internally, to the public, and let the public see how they’re tracking on that particular goal. So one of the prosecutors we’re working with in Yolo County, California, is setting a goal of diverting up to 10 percent of felony defendants in the next 18 months. That’s a goal they had already set internally, but now we’re bringing that goal out to the public and letting them see it and kind of watch the progress as that happens. And what’s great about that is as the prosecutor meets that goal, it’s a great way to say, “Hey, this is how I’m addressing some of the problems that I’ve seen,” but it also helps to hold them accountable, right? Once they set that goal publicly, it kind of creates a situation where they feel obligated to actually meet it.
Sean [00:29:49] Right. If I could summarize, you guys used a workshopping process to set a clear goal that everybody’s aligned on, and going through that process of gaining alignment allowed you to confidently assemble a plan together; it sounds like you were able to get commitment from everybody. You know, in my opinion, that combination of alignment, competence, and commitment amongst the team is the power combination.
Andrew [00:30:15] Absolutely. Like, it was an idea that was not fully formed until the workshop helped crystallize it and get everybody bought into it, understand what we were actually trying to do, and then work together to kind of achieve the goal of it. And I think it was building that mission statement. It was digging into truly understanding who we were building this for, because there was some, “are we building this for prosecutors, are we building this for the public, are we building this for the media, are we building this for our funders?” Right, and it helped us zero in on the fact that we are building this for the public, and it helped us understand what the motivations are for the public, how we can build trust with them and get them to become advocates for what we do, and ideally really vigorous advocates. And the workshop was critical. Like, I think that this idea would still be…
Sean [00:31:07] Brewing.
Andrew [00:31:08] Kind of, just, yeah, it would just be flopping around if we hadn’t taken that time. You know, it just took taking that time in a structured way to have that conversation and get everybody there, away from the office, away from Slack, away from everything, to just really have the long, hard conversation and work through it.
Sean [00:31:24] And I think that’s so important for product teams, especially when you have such a diverse set of stakeholders. Get them all in a room to really work through, “who are we solving problems for? What are these problems we’re going to solve? How are we going to prioritize things?” Really to empower the team to be able to kill it. So thank you for that.
Paul [00:31:40] So as we’re coming up on the end of our time together, I got a hot take for you. I’m curious if you could share for our listeners, what is your definition of innovation? How would you define that?
Andrew [00:31:51] Wow. I guess, you know, what really excites me is building something that people are excited about using. And I think it almost goes back to some childhood need to, like, impress and excite people, right. To me, innovation is taking a problem and solving it for somebody in a way that they’re excited about and that you’re excited about and that you can, you know, show off to your friends and get people excited about this thing that you’ve solved. For me, it’s not just solving that problem and seeing how it excites people to use it, but doing it in a way that is kind of beautiful and compelling. I guess, for me, that’s what excites me about product: simply building something cool for somebody that they’re excited about.
Paul [00:32:37] It’s great. So like if a tree falls in a forest and nobody’s there, does it make a sound?
Andrew [00:32:40] Right, who cares?
Paul [00:32:43] If you solve a problem but nobody’s excited about it, did you really solve it?
Sean [00:32:47] Is it really an innovation if nobody saw it? That’s great. We always wrap up with a question to try to understand what you’re learning in our industry. What’s the last book you’ve read in the product management space that you think has been helpful for you?
Andrew [00:32:59] Well, it’s an old book, but I had been reading Good to Great recently and I think that it is still an incredibly valuable book and I got a lot of great insights from it. Our organization is really going through a whole reorg to become a product-focused organization, with the goal of: how do we organize every aspect of our company towards creating great products for people? And that book is giving me some really good insights.
Sean [00:33:26] That’s a fantastic recommendation.
Paul [00:33:30] Well Andrew, thanks so much for taking the time today. It’s been a blast getting to know you a bit better and hearing what drives your organization. It’s really meant a lot to spend some time together. So thanks for joining us.
Sean [00:33:38] Yeah, we’re big fans of the org, keep doing the work that you’re doing. You guys are doing incredible things for the world. So thank you for that.
Andrew [00:33:44] Oh, thanks so much for having me. I really appreciate it.
Paul [00:33:47] Cheers.
Andrew [00:33:47] OK, bye-bye.
Paul [00:33:50] Well, that’s it for today. In line with our goals of transparency and listening, we really want to hear from you. Sean and I are committed to reading every piece of feedback that we get, so please leave a comment or a rating wherever you’re listening to this podcast. Not only does it help us continue to improve, but it also helps the show climb up the rankings so that we can help other listeners move, touch, and inspire the world, just like you’re doing. Thanks, everyone, we’ll see you next episode.