Below is a summary of the different garbage collectors, prepared while reading the Memory Management in the Java HotSpot™ Virtual Machine whitepaper. Hope you find it useful!
Notable points in Concurrent Mark-Sweep Collector:
- Also known as the low-latency collector because it minimizes pause times
- Some phases are executed concurrently with the application
- The only garbage collector that does not compact the old generation after a major collection
- Because no compaction is performed, extra heap space may be required (free space becomes fragmented)
- Has an option to run in incremental mode (-XX:+CMSIncrementalMode): in this mode the concurrent phases are done incrementally, which is useful on machines with fewer processors because processing is periodically yielded back to the application
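As a hedged illustration, the flags below show how CMS could be enabled on HotSpot JVMs of the whitepaper's era (`MyApp` is a placeholder class name; CMS incremental mode was deprecated in later JDKs and CMS itself was removed in JDK 14, so these flags will not work on modern JVMs):

```shell
# Illustrative only: JDK 5/6-era HotSpot flags.
# -XX:+UseConcMarkSweepGC   enables the CMS (low-latency) collector
# -XX:+CMSIncrementalMode   runs the concurrent phases incrementally,
#                           useful on machines with few processors
java -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode MyApp
```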
Click here to see slides of “How to write: Clean, Testable code” presentation by Miško Hevery given at Google NYC Tech Talks meetup.
Also check out the Guide: Writing Testable Code on the author’s website.
Below is a sample configuration of a connection pool using OracleDataSourceFactory:
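The original configuration snippet is not reproduced here; the fragment below is a minimal sketch of what such a Resource element in Tomcat's context.xml might look like. The JNDI name, URL, and credentials are hypothetical placeholders, not values from the original post:

```xml
<!-- Hypothetical sketch of a connection pool Resource using
     OracleDataSourceFactory; name, url, user and password are
     placeholder values -->
<Resource name="jdbc/myOracleDS"
          auth="Container"
          type="oracle.jdbc.pool.OracleDataSource"
          factory="oracle.jdbc.pool.OracleDataSourceFactory"
          url="jdbc:oracle:thin:@dbhost:1521:ORCL"
          user="scott"
          password="tiger"/>
```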
Some points to note:
- The factory attribute is set to oracle.jdbc.pool.OracleDataSourceFactory. If the factory attribute is not specified, the connection pool will be configured using the default Commons DBCP BasicDataSourceFactory
- The user name is set using the attribute "user"
The user name for a connection pool using BasicDataSourceFactory is specified using the attribute username. Why the difference? Because user (or username) is not an attribute of Tomcat’s Resource element itself, but an attribute specific to the factory. The factory could be written to look for the user name in an attribute of its choice; for example, it could look for the user name in an attribute “dbuser”. OracleDataSourceFactory looks for the user name in the attributes user, userName (note the capital N) and u. Below is a screenshot of the code in the OracleDataSourceFactory class, decompiled using Java Decompiler
Say you have two files, file1.txt and file2.txt, as shown below:
Using the commands sort and comm, we can identify the lines common to both files, the lines unique to file1 and the lines unique to file2. Below are the steps:
- The comm command works on sorted files, so the first step is to sort both file1.txt and file2.txt
- sort file1.txt > file1_sorted.txt
- sort file2.txt > file2_sorted.txt
- Find lines common to both files
- comm -12 file1_sorted.txt file2_sorted.txt | nl
- Option “-12” suppresses column 1 (lines unique to file1_sorted.txt) and column 2 (lines unique to file2_sorted.txt), leaving only the lines common to both files
- “| nl” adds line numbers to the output
- Find lines unique to file2_sorted.txt
- comm -13 file1_sorted.txt file2_sorted.txt | nl
- Option “-13” suppresses column 1 (lines unique to file1_sorted.txt) and column 3 (lines common to both files)
- Find lines unique to file1_sorted.txt
- comm -23 file1_sorted.txt file2_sorted.txt | nl
- Option “-23” suppresses column 2 (lines unique to file2_sorted.txt) and column 3 (lines common to both files)
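The steps above can be tried end to end with two small sample files (the file contents here are hypothetical, chosen just to show one line in each of comm's three columns):

```shell
# Create two small sample files (contents are hypothetical, for illustration)
printf 'apple\nbanana\ncherry\n' > file1.txt
printf 'banana\ncherry\ndate\n' > file2.txt

# comm needs sorted input
sort file1.txt > file1_sorted.txt
sort file2.txt > file2_sorted.txt

# Keep only column 3: lines common to both files -> banana, cherry
comm -12 file1_sorted.txt file2_sorted.txt | nl

# Keep only column 2: lines unique to file2 -> date
comm -13 file1_sorted.txt file2_sorted.txt | nl

# Keep only column 1: lines unique to file1 -> apple
comm -23 file1_sorted.txt file2_sorted.txt | nl
```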
Below is an example DOCTYPE declaration from a JSP Tag Library Descriptor (TLD):
<!DOCTYPE taglib PUBLIC "-//Sun Microsystems, Inc.//DTD JSP Tag Library 1.1//EN" "web-jsptaglib_1_1.dtd">
Note that the declaration doesn’t contain the full path to the DTD, like “http://abc.com/xyz/web-jsptaglib_1_1.dtd”. So how does the XML parser locate the DTD? And even when a full URI is specified, does the parser always fetch the DTD from the web server?
The XML parser resolves the public identifier “-//Sun Microsystems, Inc.//DTD JSP Tag Library 1.1//EN” to the resource javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd. This DTD is loaded from jsp-api.jar.
Doctype declaration format:
<!DOCTYPE rootElementName PUBLIC "publicIdentifier" "systemIdentifier">
For the DOCTYPE declaration in TLD:
- publicIdentifier = "-//Sun Microsystems, Inc.//DTD JSP Tag Library 1.1//EN"
- systemIdentifier = "web-jsptaglib_1_1.dtd"
- The public identifier “-//Sun Microsystems, Inc.//DTD JSP Tag Library 1.1//EN” is registered with the resource URL “/javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd” using an EntityResolver (SchemaResolver implements EntityResolver) in DigesterFactory.java in catalina.jar.
- From the javadoc of the method org.apache.commons.digester.Digester#register():
Digester contains an internal EntityResolver implementation. This maps PUBLICID’s to URLs (from which the resource will be loaded). A common use case for this method is to register local URLs (possibly computed at runtime by a classloader) for DTDs. This allows the performance advantage of using a local version without having to ensure every SYSTEM URI on every processed xml document is local. This implementation provides only basic functionality. If more sophisticated features are required, using setEntityResolver(org.xml.sax.EntityResolver) to set a custom resolver is recommended.
- The XMLReader is set up with this entity resolver in Digester.java in tomcat-coyote.jar
- The XMLReader resolves the registered public identifiers to resource URIs using the entity resolver
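One way to confirm that the DTD really ships inside the jar is to list the jar's contents (the path to jsp-api.jar below is an assumption; adjust it for your Tomcat installation):

```shell
# List the entry for the JSP 1.1 taglib DTD inside jsp-api.jar
# (jar path is hypothetical; adjust to your installation)
unzip -l /path/to/tomcat/lib/jsp-api.jar | grep web-jsptaglibrary_1_1.dtd
```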
I noticed the error below in the Tomcat logs while looking for some other information; the application was working normally despite this error. Another instance of Tomcat (same version, 6.0.16) running the same web application didn’t have this error in its logs.
ERROR [main] (Digester.java:1555) - Parse Fatal Error at line 2 column -1: Relative URI "web-jsptaglib_1_2.dtd"; can not be resolved without a base URI.
org.xml.sax.SAXParseException: Relative URI "web-jsptaglib_1_2.dtd"; can not be resolved without a base URI.
Though the application was working, I wondered whether any JSP custom tags were not working as expected; some of the custom tags were not UI-related, so it would not be obvious to end users if they misbehaved. Below are the findings from debugging this issue:
- The Crimson parser was used to parse the TLD
- The web application I deployed didn’t have any Crimson-related jars. I verified that Crimson-related jars were not in TOMCAT_HOME/lib either. Puzzled, I ran “findjars crimson” in TOMCAT_HOME and found that another web app deployed on the same Tomcat has a jar containing Crimson parser classes and the file META-INF/services/javax.xml.parsers.SAXParserFactory with the value “org.apache.crimson.jaxp.SAXParserFactoryImpl”.
- The javadoc for the method SAXParserFactory#newInstance() details how a SAXParserFactory implementation is chosen. From the javadoc:
Use the Services API (as detailed in the JAR specification), if available, to determine the classname. The Services API will look for a classname in the file META-INF/services/javax.xml.parsers.SAXParserFactory in jars available to the runtime.
- The Crimson parser from a jar in another web application (say webapp1) was used to parse the TLD in the webapp I deployed (say webapp2). Per Tomcat’s class loader documentation, jars/classes in one webapp are not visible to other webapps deployed on the same Tomcat, so I suspected this to be a bug in Tomcat.
- I looked into Tomcat’s source code to see if I could spot the bug. I noticed that the getParser() method in Digester.java in tomcat-coyote.jar checks whether the parser is null.
Looking at the source code, I guessed that if Tomcat deploys webapp1 first, the XML parser in Digester will be set to the Crimson parser, and when webapp2 is deployed the parser variable will not be null, so the Crimson parser will be used for webapp2 as well. But this didn’t sound right, because a jar in one webapp was deciding which parser to use for processing TLDs in another (possibly all other) webapps.
- I found Bug 29936 (“XML parser loading problems by container”), which seemed related to the problem I was having. The fix was included in Tomcat 6.0.18, so I deployed the web applications on Tomcat 6.0.18 and didn’t see the error message in the log file!
- From Comment 8:
A possible solution is to load the (default) parser into the Digester prior to
having it being loaded by the WebappClassloader. Since this appears to be
one-time settable, it will use this parser regardless of what the webapp has.
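The jar search mentioned in the findings above can be approximated with a small shell loop (the actual findjars script is not shown in the post; this is a sketch of the same idea, assuming unzip is available):

```shell
# Approximation of a "findjars crimson"-style search: list every jar
# under the current directory that contains "crimson" in an entry name
find . -name '*.jar' | while read -r jarfile; do
    if unzip -l "$jarfile" 2>/dev/null | grep -qi crimson; then
        echo "$jarfile"
    fi
done
```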
I was really interested in seeing the code changes made as part of this bug fix, but I couldn’t find a link or documentation detailing them. After trying a few things I found the changes here. (See this post for how I found this link.)
So, upgrading to Tomcat 6.0.18 would fix the XML parser issue. I am still not clear whether the application should work as expected despite the error we are seeing in the log file on 6.0.16; I need to find that out!
While performance testing a web application deployed on Tomcat, we noticed that the average response time was about 0.5 seconds, while some requests were seeing response times of 2.0 seconds. In JConsole we saw a see-saw pattern in memory usage; more importantly, we noticed that there had been 20 major garbage collections and that the garbage collector used was PS MarkSweep.
The average time for a major garbage collection was 1.4 seconds. Since the PS MarkSweep garbage collector pauses the application while performing a major collection, the response time for a request arriving during a major collection is the normal request time plus the full GC time, i.e., 0.5 + 1.4 ≈ 2.0 seconds.
After increasing the heap size (thereby giving more memory to the young generation and the two survivor spaces), no major garbage collections and no spikes in response times were observed.
Rationale: when a minor garbage collection is performed, live objects in Eden and the occupied survivor space are copied into the empty survivor space. Any objects that cannot be copied into the survivor space (because it fills up) are promoted to the tenured region. After a few such minor collections the tenured region fills up, and a full garbage collection is performed to free space there. By increasing the heap size, the survivor spaces become large enough to hold all live objects during a minor collection, and promotion to the tenured region is avoided.
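As a hedged sketch, the flags below show how this behaviour could be observed and the fix applied on HotSpot JDKs of that era (`MyApp` and the heap sizes are illustrative placeholders; modern JDKs replace the Print* flags with unified -Xlog:gc logging):

```shell
# Observe GC activity, including major (full) collections and their pause
# times (classic JDK 5/6-era flags; modern JDKs use -Xlog:gc instead)
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps MyApp

# Increase the overall heap, which also grows the young generation and
# the survivor spaces (sizes here are illustrative, not from the post)
java -Xms1024m -Xmx1024m MyApp
```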