I think that's a really good question. In general, if you can make the flagging decisions static, meaning that for any request, this version of the code is going to go one particular way through the system, all the way through, that's the ideal, because then you get an audit trail via source control. If my decision about whether to show that social login button is powered entirely by configuration that's checked into the same repository as the code itself (or maybe a system configuration repository), then you get this awesome, available audit trail. You also get nice things around availability, because you don't have some external system you need to talk to. That's great, and there's an argument for, in the first instance, preferring toggles or flags that work that way. The problem is that almost always there's some need for that flagging configuration to be more dynamic. Take an operations toggle, for example. If you've got really healthy Continuous Delivery practices, and I talked to a company the other day that does this, then when their hair's on fire and they need to turn off the external tax calculation vendor, or switch from recommendation system A to recommendation system B because system A is eating all of the CPU, they just update the configuration in code and run it through their delivery pipeline. That's how they make that change in production. If you can do that, good for you; that's amazing. But most real-life organizations need an operational capability to do it at runtime, without having to make that configuration change in code.
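[Editor's note] As a minimal sketch of the static approach described above, here is what version-controlled flag configuration might look like. The file name, flag names, and `is_enabled` helper are all hypothetical illustrations, not an API from any particular flagging library:

```python
# feature_flags.py -- checked into the same repository as the application code.
# Changing a flag means a commit and a deploy, so source control IS the audit trail,
# and there is no external flag service that can be unavailable.

FLAGS = {
    "show_social_login_button": True,   # hypothetical release toggle
    "use_external_tax_vendor": True,    # hypothetical operations toggle
    "recommendation_system": "A",       # switch to "B" via a config change + redeploy
}

def is_enabled(name):
    """Look up a static boolean flag; every request served by this build
    of the code sees exactly the same answer."""
    return bool(FLAGS.get(name, False))
```

The trade-off the speaker describes follows directly: flipping `use_external_tax_vendor` off here requires a full pipeline run, which is fine with strong CD but too slow for many teams in an incident.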
So in that case, you need it to be more dynamic. The same is true for things like A/B testing, and for toggles used to incrementally roll out a feature: let's roll out the social button to 10% of our users and make sure we don't get any 500s, then let's roll it out to 50% of our users. If you're using feature toggles, feature flags, for controlled rollout, you generally need them to be more dynamic than a code change. And so in that case, there are basically two ways to get that auditability.
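[Editor's note] One common way to implement the "roll out to 10%, then 50%" idea above is deterministic hash bucketing. This is a sketch under assumed names (`in_rollout`, the flag and user IDs are illustrative), not the mechanism of any specific flag system:

```python
import hashlib

def in_rollout(user_id: str, flag_name: str, percentage: int) -> bool:
    """Deterministically place a user in a bucket from 0 to 99.
    The same user always lands in the same bucket for a given flag,
    so widening 10% -> 50% only ever adds users; nobody flips back off."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percentage

# Usage: serve the social button only to the current rollout cohort.
show_button = in_rollout("user-42", "social_button", 10)
```

Hashing on flag name plus user ID (rather than user ID alone) also keeps different flags' cohorts independent of each other.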
One way is to have an audit trail in whatever feature flag management system you're using. Whenever someone updates what percentage of users should be getting a feature, or dynamically flips a flag from off to on or vice versa, you record that in some kind of audit log. That's one thing, and that's great; it could well be useful for compliance reasons. What's probably more useful, though, is observability around your feature flag decisions. In the context in which you're making a flagging decision, most commonly while a service is serving a web request and deciding whether to do XYZ, if you can include the state of those flags in your logging, in your metrics, in your observability systems, then you get really, really rich insight. Not just an audit trail ("this request had a 500, what was happening?") but the ability to slice and dice and say: latency is going up for a certain percentage of requests; is there any correlation between this increase in latency and the feature flag we flipped on five hours ago? That's a real superpower, particularly if you're using feature flags heavily: the ability to slice and dice your production system metrics, and ideally your business metrics too. You want to be able to look at a graph showing that the conversion rate, or the click-through rate on the recommendation system, noticeably dropped in the last week, and ask: what feature flags did we change around that time? Or even better: is there a correlation where the people with the feature flag on were behaving differently from the people with the feature flag off? That's super useful as a general capability.
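[Editor's note] The observability idea above, recording flag state at the point of each flagging decision, can be sketched as a structured log line per request. The `handle_request` function and field names are hypothetical; real systems would attach this to existing request logging or tracing:

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("requests")

def handle_request(request_id, status_code, flag_states):
    """Emit one structured log line per request that includes the state of
    every flag evaluated while serving it, so errors and latency spikes can
    later be correlated with specific flag changes."""
    record = {
        "request_id": request_id,
        "status": status_code,
        "flags": flag_states,  # e.g. {"new_recommender": True}
    }
    log.info(json.dumps(record))
    return record

handle_request("req-123", 500, {"new_recommender": True})
```

With logs shaped like this, "is the 500 rate correlated with the flag we flipped five hours ago?" becomes a query grouping error rate by `flags.new_recommender`.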
It's something you need if you're using these for A/B testing; that's kind of the point, to ask what the difference in behavior is depending on the state of the flag. But if you generalize that, and I think this is a really good example of why it's worth thinking about feature flags broadly, putting A/B tests in the same conceptual bucket as a release toggle and thinking about all of them the same way, then you start asking: well, why can't we do A/B testing for an operational change? Why can't we do A/B testing for every feature? I think it's Uber that has a phrase along the lines of "every feature should be an experiment." What that gets to is that at the end of the day, you should be able to slice and dice any change to the system and ask: the people that had this change, how did they behave differently? Whether that's more errors, or increased latency, or lower conversion in terms of people opting to put something into their cart, they're all fundamentally the same kind of question.
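[Editor's note] The "slice and dice any change by cohort" idea reduces to comparing an outcome metric between the flag-on and flag-off groups. A minimal sketch, with made-up per-user records and a hypothetical `conversion_rate` helper (a real analysis would also need a significance test):

```python
from statistics import mean

# Hypothetical per-user records: flag state plus an outcome (did they convert?).
events = [
    {"user": "u1", "flag_on": True,  "converted": 1},
    {"user": "u2", "flag_on": True,  "converted": 0},
    {"user": "u3", "flag_on": False, "converted": 1},
    {"user": "u4", "flag_on": False, "converted": 1},
]

def conversion_rate(events, flag_on):
    """Mean conversion for the cohort whose flag matched `flag_on`,
    or None if that cohort is empty."""
    cohort = [e["converted"] for e in events if e["flag_on"] == flag_on]
    return mean(cohort) if cohort else None

on_rate = conversion_rate(events, True)
off_rate = conversion_rate(events, False)
```

The same comparison works whether the outcome column is conversions, error counts, or latency, which is exactly the speaker's point that these are all the same question.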