Major
Detail
When the tl:KnowledgeBase creates a tl:TLObject, for example as the result of a search, it has to determine the appropriate implementation class for its tl:TLClass.
Improvement
The mapping of implementation class to tl:TLClass should be cached, so that the Java class does not have to be looked up again for every tl:TLObject.
Implementation
A cache has been built into tl:DynamicBinding, see DynamicBinding._implementationClasses. The cache maps the tl:TLID of a tl:TLType to the Java Class object of the tl:TLObject implementation. It therefore effectively caches more than just the resolution of the qualified class name to the Class object, which improves performance further. On the other hand, this makes generalization more difficult. However, there are currently no concrete plans to cache class resolution anywhere else.
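The caching pattern described above can be sketched as follows. This is a minimal illustration, not the actual DynamicBinding code: the class name, method names, and the use of a String key instead of a real tl:TLID are assumptions for the example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: cache the resolved implementation Class per type id,
// so the expensive Class.forName lookup runs at most once per type.
public class ImplClassCache {
	// Stands in for DynamicBinding._implementationClasses (keyed by tl:TLID there).
	private final Map<String, Class<?>> _implementationClasses = new ConcurrentHashMap<>();

	public Class<?> implClass(String typeId, String qualifiedClassName) {
		return _implementationClasses.computeIfAbsent(typeId, id -> {
			try {
				// Resolution happens only on the first request for this type.
				return Class.forName(qualifiedClassName);
			} catch (ClassNotFoundException ex) {
				throw new IllegalArgumentException("No implementation class: " + qualifiedClassName, ex);
			}
		});
	}

	public static void main(String[] args) {
		ImplClassCache cache = new ImplClassCache();
		Class<?> first = cache.implClass("type-1", "java.util.ArrayList");
		Class<?> second = cache.implClass("type-1", "java.util.ArrayList");
		// The second call is a pure map lookup and yields the same Class instance.
		System.out.println(first == second);
		System.out.println(first.getName());
	}
}
```

Using `computeIfAbsent` keeps the lookup lock-free on the hot path once the entry exists, which matters when many objects of the same type are instantiated in a loop.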
Background
In the context of #26910, it was determined that optimizing here is worthwhile: when sending a changeset of 300,000 objects, this code path accounted for two thirds of the remaining runtime after the other optimizations.
Example stacktrace
at com.top_logic.knowledge.wrap.binding.DynamicBinding.findImplClass(DynamicBinding.java:152)
at com.top_logic.knowledge.wrap.binding.DynamicBinding.createBinding(DynamicBinding.java:133)
at com.top_logic.knowledge.wrap.ImplementationFactory.createBinding(ImplementationFactory.java:53)
at com.top_logic.knowledge.service.db2.KnowledgeItemImpl.createWrapper(KnowledgeItemImpl.java:46)
at com.top_logic.knowledge.service.db2.WrappedKnowledgeItem.initWrapper(WrappedKnowledgeItem.java:28)
at com.top_logic.knowledge.service.db2.DBKnowledgeItem.onLoad(DBKnowledgeItem.java:259)
at com.top_logic.knowledge.service.db2.DBKnowledgeBase.createItem(DBKnowledgeBase.java:5158)
at com.top_logic.knowledge.service.db2.DBKnowledgeBase.findOrCreateItem(DBKnowledgeBase.java:5122)
at com.top_logic.knowledge.service.db2.DBKnowledgeBase.findOrCreateItem(DBKnowledgeBase.java:5090)
at com.top_logic.knowledge.service.db2.MonomorphicSearch$FullObjectResult.findNext(MonomorphicSearch.java:321)
at com.top_logic.basic.sql.ResultSetBasedIterator.findNext(ResultSetBasedIterator.java:44)
at com.top_logic.basic.col.CloseableIteratorBase.hasNext(CloseableIteratorBase.java:31)
at com.top_logic.basic.col.CloseableIteratorAdapter.hasNext(CloseableIteratorAdapter.java:29)
at com.top_logic.knowledge.service.db2.DBKnowledgeBase.toList(DBKnowledgeBase.java:1926)
at com.top_logic.knowledge.service.db2.DBKnowledgeBase.search(DBKnowledgeBase.java:1912)
at com.top_logic.knowledge.service.BulkIdLoad.resolveIdentifiers(BulkIdLoad.java:186)
at com.top_logic.knowledge.service.BulkIdLoad.loadUncachedInRevision(BulkIdLoad.java:126)
at com.top_logic.kafka.knowledge.service.exporter.TypeFilterRewriter.resolveCallbacks(TypeFilterRewriter.java:161)
at com.top_logic.kafka.knowledge.service.exporter.TypeFilterRewriter.rewrite(TypeFilterRewriter.java:149)
Test
TestClassCaching checks whether `ConcurrentHashMap.computeIfAbsent(...)` is faster than `Class.forName(...)`. Currently this is clearly the case: the latter takes more than 10x as long (JDK 11, Linux).
Limitations: The map has only one entry, and exactly this entry is always fetched. However, the performance of `get(key)` on a map depends very little on its fill level; the general overhead of fetching an entry, plus the synchronization in `ConcurrentHashMap`, should be the deciding factor here. Additionally, the actual cache stores more than just this mapping and should therefore save even more time.
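The comparison made by TestClassCaching can be sketched roughly as follows. This is not the actual test class; the class name, iteration count, and use of `System.nanoTime()` for timing are assumptions for illustration (a real micro-benchmark would rather use a harness such as JMH to avoid JIT artifacts).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the comparison: repeated Class.forName versus
// repeated lookup in a one-entry ConcurrentHashMap, as described above.
public class ClassCachingSketch {
	static final int ITERATIONS = 100_000;

	public static void main(String[] args) throws Exception {
		String name = "java.util.ArrayList";
		Map<String, Class<?>> cache = new ConcurrentHashMap<>();
		cache.put(name, Class.forName(name));

		long t0 = System.nanoTime();
		for (int i = 0; i < ITERATIONS; i++) {
			Class.forName(name);
		}
		long forNameNanos = System.nanoTime() - t0;

		long t1 = System.nanoTime();
		for (int i = 0; i < ITERATIONS; i++) {
			// Entry is always present, so the lambda never runs.
			cache.computeIfAbsent(name, n -> { throw new AssertionError(); });
		}
		long cacheNanos = System.nanoTime() - t1;

		System.out.println("Class.forName total ns: " + forNameNanos);
		System.out.println("cache lookup total ns:  " + cacheNanos);
		// Both paths resolve to the same Class instance (same class loader).
		System.out.println("same class: " + (cache.get(name) == Class.forName(name)));
	}
}
```

Like the test described above, this deliberately uses a single-entry map; since `get(key)` cost depends little on fill level, the single entry does not distort the comparison much.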