This article is part two in a series describing the evolution of my knowledge of, and attitude toward, creating Drupal custom entities. In part 1, I explained what sparked my interest in custom entities, described my first, rather extreme, use of them (nearly all content on the site was a custom entity), and shared my conclusion from that experience. In this article, I will describe the other approaches I tried and the final position I arrived at on when and how I'll use them in the future.
Phase 2: Custom entity avoidance
A few months after finishing the project described in part 1, I began work on another site that seemed like a potentially good candidate for custom entities. The site stored many calculation results that were mostly "under the hood" information. By that I mean that an individual calculation wasn't something you'd think of as a Drupal node with its own view and edit pages, for example. In fact, this information would be calculated once and never edited afterward, and during the analysis phase of the project it was hard to imagine a use case for viewing a calculation result as a standalone page either. So this seemed like an ideal fit for a custom entity.
However, at this point I still had a pretty bad taste in my mouth from maintaining the custom entities on that prior project. Because the project was expected to change significantly over time, there was no doubt those changes would include many to these entities, so I considered the maintenance cost of custom entities substantial for this project. (See part 1 for the details.) What's more, by this time I had a much better understanding of the Drupal cache system and how well it worked.
I decided to just use regular Drupal content types and nodes to store the calculations and see how it went. The calculations the site performed were so complex and numerous that I built the recalculation process to work in "chunks" using the Drupal Batch API so it would not time out. Because of that, the performance overhead of creating many hundreds of nodes was not a problem from a technical standpoint. From the user's perspective, the visual cues of the batch system, such as the progress bar, also helped mask that overhead.
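Drupal's Batch API is what drives this kind of chunked processing. As a rough illustration only (the module, function, and field names here are hypothetical, not from the actual project), a recalculation batch might be assembled like this:

```php
<?php

/**
 * Builds a batch that recalculates results in chunks of 50 nodes,
 * so no single request can hit the PHP execution time limit.
 */
function mymodule_recalculate_all(array $nids) {
  $batch = [
    'title' => t('Recalculating results'),
    'operations' => [],
    'finished' => 'mymodule_recalculate_finished',
  ];
  // One batch operation per chunk of 50 node IDs.
  foreach (array_chunk($nids, 50) as $chunk) {
    $batch['operations'][] = ['mymodule_recalculate_chunk', [$chunk]];
  }
  batch_set($batch);
}

/**
 * Batch operation callback: recalculates one chunk of nodes.
 */
function mymodule_recalculate_chunk(array $nids, array &$context) {
  $nodes = \Drupal::entityTypeManager()
    ->getStorage('node')
    ->loadMultiple($nids);
  foreach ($nodes as $node) {
    // mymodule_compute() stands in for the site's actual calculation.
    $node->set('field_result', mymodule_compute($node));
    $node->save();
  }
  // Track progress so the "finished" callback can report it.
  $context['results']['processed'] =
    ($context['results']['processed'] ?? 0) + count($nids);
}
```

Each operation runs in its own request, and Drupal renders the progress bar between them, which is where the user-facing "masking" effect comes from.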
The remaining question was whether there would be any performance issues when reading that data. The calculations were used on a dashboard-like page with several sections, each using a large number of individual calculation results to display a report or a graph. I expected that reading several thousand of those nodes, on top of the time needed to build the HTML for each dashboard section, would make the page unacceptably slow.
To my surprise, it worked fine. That is, the response time for the dashboard pages was well within acceptable limits. What made that possible was the Drupal cache system. It turns out that in Drupal 8 caching is used very aggressively (i.e., whenever possible) and is efficient. The relevant point for this application is that each calculation was stored in the cache as a single, fully loaded entity, so displaying it did not, in practice, require the complex joins across all the individual field tables.
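To make that caching behavior concrete, here is a minimal sketch (variable names are illustrative) of why repeated loads of the same calculation nodes stay cheap. After a node has been loaded once, Drupal's persistent entity cache stores it as a single serialized object, so later loads skip the per-field table joins:

```php
<?php

$storage = \Drupal::entityTypeManager()->getStorage('node');

// First load: builds each entity by joining its field tables,
// then writes the assembled entities to the entity cache.
$results = $storage->loadMultiple($calculation_nids);

// Subsequent loads (including on later requests): served from the
// entity cache as complete objects, with no field-table joins.
$results_again = $storage->loadMultiple($calculation_nids);
```

This is why a dashboard that reads thousands of calculation nodes can still respond quickly once the cache is warm.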
My conclusion from this project was that for most sites, even those involving large amounts of pure data, Drupal's regular content type nodes are probably adequate. Better yet, this architecture required no coding or knowledge beyond what is already used on nearly every Drupal site.