Hi! I am meditating over CPU profiles of an app running in production that uses tapir (with circe for JSON). I can see that a noticeable amount of samples point at tapir's Schema applyFieldsValidation method. The app does not specify any custom validations for the Schema instances of its models; all the schemas are derived automatically.
Yet some default validations happen all the time. What are these validations? Are they really needed if the whole input structure is already checked for correctness by the JSON codec (circe in this case)?
Looking at the code of hasValidation, it looks like only SRef provides some validations by default.
Well, there is this optimization, which should only trigger validation if there is any:
This relies on hasValidation - maybe it should be a lazy val, not a def, to avoid recomputing it. But that would need to be validated in some microbenchmark.
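To illustrate why the def-vs-lazy-val distinction matters here, below is a minimal self-contained sketch with toy types (HasValidationSketch and Node are made up for illustration, not tapir's actual code): a recursive def re-walks the whole schema tree on every call, while a lazy val answers the same question at most once per node.

```scala
object HasValidationSketch:
  var walkCount = 0 // counts node visits, to show the recomputation

  final class Node(val hasOwnValidation: Boolean, val children: List[Node]):
    // a `def` re-walks the subtree on every call
    def hasValidationDef: Boolean =
      walkCount += 1
      hasOwnValidation || children.exists(_.hasValidationDef)

    // a `lazy val` computes the answer at most once per node
    lazy val hasValidationLazy: Boolean =
      hasOwnValidation || children.exists(_.hasValidationLazy)

  // three calls on a 3-node tree: the `def` visits 3 nodes each time
  def demo(): Int =
    walkCount = 0
    val root = Node(false, List(Node(false, Nil), Node(false, Nil)))
    (1 to 3).foreach(_ => root.hasValidationDef)
    walkCount // 9 visits; the lazy val variant would do at most 3
```

Caching like this would of course only be safe if the schema tree is effectively immutable after construction, which a microbenchmark alone would not confirm.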
Can you check whether your schema has hasValidation = true? Or what kinds of validators are there in the object tree?
With SRef, it’s not really that validations exist - rather, at the point of checking if they DO exist, we don’t know what the reference resolves to. So the target schema might, or might not have validations. Hence the safe option is true, meaning in fact “I don’t know”.
Validations also do exist if validator != Validator.pass.
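For illustration, here is a toy sketch of that check (simplified stand-in types, not tapir's actual definitions): a schema "has validation" whenever its validator is anything other than the no-op one.

```scala
object ValidatorSketch:
  // Simplified stand-ins for tapir's Validator and Schema (assumption:
  // illustrative only; the real types carry much more information)
  enum Validator[+T]:
    case Pass
    case Enumeration(values: List[T])

  final case class Schema[T](validator: Validator[T]):
    // validations exist whenever the validator is not the no-op one
    def hasValidation: Boolean = validator != Validator.Pass

  def demo(): (Boolean, Boolean) =
    val plain    = Schema[String](Validator.Pass)
    val enumLike = Schema[String](Validator.Enumeration(List("A", "B")))
    (plain.hasValidation, enumLike.hasValidation) // (false, true)
```

This is exactly why a derived enumeration schema trips the check: it carries an enumeration validator rather than the pass-through one.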
Ah, I see now. In my case the validators are brought in by enums, as we define schemas for them like this:
given TapirSchema[MyEnum] =
  TapirSchema.derivedEnumeration.defaultStringBased
Thanks!
Now I wonder what the best way would be to define a schema for enums so that it still produces correct documentation but does not introduce runtime checks.
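One possible direction, sketched below with toy types (not tapir's API; EnumSchemaSketch and its fields are made up for illustration): copy the derived schema with a pass-through validator to drop the runtime check. The caveat is that in tapir the enumeration validator is also what feeds the documented set of allowed values, so dropping it means restating those values elsewhere, e.g. in the description.

```scala
object EnumSchemaSketch:
  // Toy stand-ins, assumed for illustration only
  enum Validator[+T]:
    case Pass
    case Enumeration(values: List[T])

  final case class Schema[T](
      validator: Validator[T],
      description: Option[String] = None
  ):
    def hasValidation: Boolean = validator != Validator.Pass

  def demo(): Schema[String] =
    // what a derived enumeration schema conceptually looks like
    val derived = Schema(Validator.Enumeration(List("Red", "Green")))
    // drop the runtime check, restate the allowed values for the docs
    derived.copy(
      validator = Validator.Pass,
      description = Some("One of: Red, Green")
    )
```

Whether the documentation generated this way is "correct enough" (a description instead of a proper enum list in the spec) is a trade-off worth weighing before optimizing.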