I had a hand in ZamiaCAD. It is hyped for its scalability: there is a manager that breaks up large objects and passivates (i.e., persists) them in a custom database, so that you can design arbitrarily large circuits with a moderate amount of RAM. However, a question appears immediately: why not leave this task to the OS? After all, main-memory paging is one of an OS's primary duties. Why did the ZamiaCAD programmers need to reinvent the wheel?
The standard answer is that application-tailored paging can be more efficient. I am not sure about that. What I am sure of is that it is a big pain for the programmer! Now, for every operation, he must constantly decide whether passivation of the object at hand is necessary, and which objects referenced from it may also need to be persisted. Maybe that is not so difficult for the inventor of Zamia, because he is a little genius, but for an average person it is a major, almost insurmountable obstacle to contributing. A rough sketch of the bookkeeping this imposes is below.
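Here is a minimal, hypothetical sketch of that burden (the names Node and Manager are illustrative, not ZamiaCAD's actual API): after every mutation, the programmer has to remember to mark the touched object dirty, reason about what it references, and decide right there whether anything should be evicted to the database.

```java
import java.util.ArrayList;
import java.util.List;

class Node {
    final List<Node> children = new ArrayList<>();
    boolean dirty;

    void addChild(Node child) {
        children.add(child);
        dirty = true;                 // must remember: this object changed
        child.dirty = true;           // must also reason about referenced objects
        Manager.maybePassivate(this); // and decide, here and now, whether to evict
    }
}

class Manager {
    static void maybePassivate(Node n) {
        // In a real manager: if memory is tight, serialize n (and its dirty
        // children?) to the custom database and drop the POJOs from the heap.
    }
}
```

Every single operation on the model carries this extra reasoning, which is exactly why it deters contributors.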
So why did the little genius decide to do this terrible thing, which kills his application? The problem is not that he wanted to optimize the paging. He had no other option, because as soon as the JVM heap exceeds physical memory, thrashing slows program execution down to a full stop ("the whole program falls into a thrashing hole", as they put it here). So Java programmers, initially relieved of the duty to release memory, end up with an extremely complex custom passivation mechanism, specific to every application. You cannot create a universal POJO-graph partition. This makes Java processing of large models a nightmare.
You cannot describe large models directly in Java. To passivate objects, you have to replace the POJO Java references with database identifiers. You can emulate such db-references by creating your own virtual machine, but this is involved and makes programming more complex and error-prone, because you give up the Java static type checker and, in effect, invent your own language. On top of that, the emulation will slow down your execution by an order of magnitude.
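To make the point concrete, here is a minimal sketch, assuming a simple id-keyed store (ObjectStore, Signal, and Port are made-up names, not ZamiaCAD's actual code), of what such a db-reference looks like: instead of holding a direct POJO pointer, an object holds a long id and resolves it through the store on every access. Note the unchecked cast: the compiler can no longer prove that the id really points at a Signal, so a type error surfaces only at run time, which is the lost static checking.

```java
import java.util.HashMap;
import java.util.Map;

class ObjectStore {
    private final Map<Long, Object> cache = new HashMap<>();
    private long nextId = 1;

    long put(Object o) { long id = nextId++; cache.put(id, o); return id; }

    // In a real system, a cache miss here would trigger a disk read
    // to re-activate a passivated object.
    Object get(long id) { return cache.get(id); }
}

class Signal {
    final String name;
    Signal(String name) { this.name = name; }
}

class Port {
    private final long signalId;   // db-reference instead of "Signal signal"
    Port(long signalId) { this.signalId = signalId; }

    Signal resolve(ObjectStore store) {
        // Unchecked cast: the type system cannot verify this id is a Signal.
        return (Signal) store.get(signalId);
    }
}

public class DbRefDemo {
    public static void main(String[] args) {
        ObjectStore store = new ObjectStore();
        long id = store.put(new Signal("clk"));
        Port port = new Port(id);
        System.out.println(port.resolve(store).name); // indirection on every access
    }
}
```

Every field access goes through resolve(), so the store lookup (and, in the real thing, possible deserialization from disk) sits on the hot path. That constant indirection is where the order-of-magnitude slowdown comes from.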