Constant pool size limit exceeded


Since yesterday I have been encountering an issue when building a small (~10 KLOC) project using Tapir.

I usually define my Tapir objects (query params, path params, jsonBody) in a trait, which I later extend with an object where I write the endpoints. The models used are located in other packages.
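To make the layout concrete, here is a hedged sketch of the structure described above; all names (`CommonRoute`, `GetSomething`, `SomethingResponse`, `Endpoints`) are illustrative, not the actual project code:

```scala
import sttp.tapir.*
import sttp.tapir.json.circe.*
import sttp.tapir.generic.auto.*
import io.circe.generic.auto.*

// Models would live in other packages in the real project.
case class GetSomething(id: String)
case class SomethingResponse(name: String, count: Int)

// Shared inputs/outputs are defined once in a trait...
trait CommonRoute:
  val somethingId: EndpointInput[String] = query[String]("id")
  val somethingBody: EndpointIO.Body[String, SomethingResponse] =
    jsonBody[SomethingResponse]

// ...and the endpoints are written in an object extending it.
object Endpoints extends CommonRoute:
  val getSomething: PublicEndpoint[String, Unit, SomethingResponse, Any] =
    endpoint.get.in("something").in(somethingId).out(somethingBody)
```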

Unfortunately, I cannot compile this trait anymore.

The compiler complains that the class is too large: the constant pool size is over the 64K limit.
I was able to make it compile with a workaround, by splitting parts of this trait (changing it to an object and importing each part in the final object that defines the endpoints), but adding some enum values and case classes to my existing models broke it again.

The error I get:

[error] Class '$' is too large. Constant pool size: 80316. Limit is 64K entries

On a successful build I used javap on the class. The last lines were:

#65273 REF_invokeSpecial com/my/path/route/CommonRoute$.$anonfun$6947:(Lscala/collection/immutable/Map;Ljava/lang/String;)Lscala/Option;
#65277 REF_invokeSpecial com/my/path/route/CommonRoute$.$anonfun$6948:()Lscala/collection/immutable/List;
#65281 REF_invokeSpecial com/my/path/route/CommonRoute$.$anonfun$6949:()Lscala/collection/immutable/List;
#65285 REF_invokeSpecial com/my/path/route/CommonRoute$.$anonfun$6950:()Lscala/collection/immutable/List;
  Scala: length = 0x0 (unknown attribute)

As you can see, it was near the limit. I can also see thousands of similar lines.

I use Scala 3.2.1 and tapir 1.2.4.
All my dependencies are up to date.
I use monix newtype, refined (and make deep use of them), circe, cats, and their associated codec integrations.

Am I doing something wrong?

Because this file (and the API) is small: 20 jsonBody, fewer than 10 query/path params, and 60 case classes:

  • 20 are simple wrappers for the input parameters of the endpoints (so when I have a “getSomething” endpoint, it takes a GetSomething(…) case class as input).
  • 2 sealed traits are each extended by 10 case classes (with 3 fields each), for a total of 20 case classes.
  • the remaining 20 case classes have an average of 5 fields.
    These case classes share ~30 monix+refined fields and 5 Scala 3 enums.
    The 5 Scala 3 string enums have fewer than 10 values each.

After some final tests, made just after writing this post and just before submitting it, I think the issue comes from the way the Scala 3 string enums are handled: each extra value I add to an enum takes ~950 entries in the pool.
I also noticed something surprising: after removing enough enum values to make the trait compile, incremental compilation handles the new values without error.


These types of errors usually occur because of auto-derivation of codecs and schemas. Are you using auto-derivation, or the semi-auto one?

If some enums or case classes are used multiple times, with auto-derivation the schemas describing them will be generated (by the macro, hence generating bytecode) multiple times. In such cases, it might be beneficial to define their schemas by hand:

implicit val schemaForMyEnum: Schema[X] = Schema.derived[X]

The val will ensure that the schema is derived once and reused. I think this should work fine with other schemas/codecs being auto-derived.
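For the Scala 3 enums and case classes mentioned in the question, a hedged sketch of this semi-auto approach might look like the following; `Color`, `Widget`, and `Schemas` are illustrative names, not the project's actual code:

```scala
import sttp.tapir.Schema

// Illustrative models; in the real project these live in other packages.
enum Color:
  case Red, Green, Blue

case class Widget(name: String, color: Color)

object Schemas:
  // Each schema is derived exactly once here. Endpoints that import these
  // givens reuse the cached instances instead of re-running the derivation
  // macro at every use site, which keeps the constant pool small.
  given Schema[Color] = Schema.derived
  given Schema[Widget] = Schema.derived
```

For string-based Scala 3 enums specifically, tapir also offers `Schema.derivedEnumeration[Color].defaultStringBased` as an alternative to `Schema.derived`.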

As a workaround, it might also help to split the definitions among several objects.
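A hedged sketch of that splitting workaround, with purely illustrative names: the shared inputs are grouped into separate objects, so the generated constants land in separate class files, each staying under the 64K limit.

```scala
import sttp.tapir.*

// Each group of shared inputs compiles to its own class file.
object UserInputs:
  val userId: EndpointInput[String] = path[String]("userId")

object OrderInputs:
  val orderId: EndpointInput[String] = path[String]("orderId")

// The final object only combines the imported pieces.
object Endpoints:
  import UserInputs.*, OrderInputs.*

  val getUser = endpoint.get.in("users" / userId)
  val getOrder = endpoint.get.in("orders" / orderId)
```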

For a more thorough investigation, we would need a reproducing example.

Thanks Adam.

That was exactly my problem.
I’m using generic auto, and adding the implicits explicitly allows the trait to compile again.