Using Spring Handler Interceptors to Decouple Business Logic

Consider a Spring MVC application deployed in a Tomcat servlet container. For each request, the following sequence of steps occurs:

  • Tomcat receives the request and executes the configured chain of servlet filters
  • The Spring Dispatcher Servlet receives the request (assuming the request URL is configured to map to the dispatcher servlet)
  • The Dispatcher Servlet maps the request URL to a handler using a handler mapping; in other words, it maps the request to an execution chain comprising a chain of handler interceptors and a controller. Spring's SimpleUrlHandlerMapping can be used to map different URL patterns to different execution chains. For example, the URL pattern /news/* could be mapped to the interceptors AuthorizationHandlerInterceptor and RequestValidationHandlerInterceptor and to NewsController.
  • The execution chain is invoked: each interceptor's preHandle method runs, then the controller, then each interceptor's postHandle method
  • The View Resolver resolves the view name returned by the controller and the view is rendered

The picture below depicts the sequence of steps:

Spring Handler Interceptors

Let’s say that the request processing logic for the URL patterns /news/* and /premiumnews/* is exactly the same, except that /premiumnews/* is accessible only to users with a premium subscription. The logic to check whether the user is signed in and has a premium subscription can be abstracted into PremiumSubscriptionValidatorHandlerInterceptor and included only in the execution chain returned for /premiumnews/*.

<bean id="handlerMapping" class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
	<property name="interceptors">
		<list>
			<ref bean="premiumSubscriptionValidatorHandlerInterceptor"></ref>
		</list>
	</property>
	<property name="mappings">
		<props>
			<prop key="/premiumnews/*">newsController</prop>
		</props>
	</property>
</bean>
<bean id="handlerMapping" class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
	<property name="mappings">
		<props>
			<prop key="/news/*">newsController</prop>
		</props>
	</property>
</bean>

Note that both URL patterns are mapped to newsController, and that premiumSubscriptionValidatorHandlerInterceptor is included only in the mapping for /premiumnews/*.
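
A minimal sketch of what such an interceptor could look like, extending Spring's HandlerInterceptorAdapter (the session attribute names and the redirect target below are assumptions for illustration, not part of the original example):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

public class PremiumSubscriptionValidatorHandlerInterceptor extends HandlerInterceptorAdapter {

	@Override
	public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler)
			throws Exception {
		Object user = request.getSession().getAttribute("user");                  // assumed to be set at sign-in
		Boolean premium = (Boolean) request.getSession().getAttribute("premium"); // assumed subscription flag
		if (user == null || !Boolean.TRUE.equals(premium)) {
			// Returning false stops the execution chain; newsController is never invoked
			response.sendRedirect(request.getContextPath() + "/subscribe.do");    // hypothetical upgrade page
			return false;
		}
		return true;
	}
}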

Another example: say there are both web and mobile versions of the news content, and the web version displays the modules “Popular on Facebook” and “News Pulse”, whose content is obtained by querying a web service. Since these modules are displayed only on the web version, the logic to query the web service can be abstracted into an interceptor and included only in the execution chain returned for web requests, as in the sketch below.
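
One way to sketch that interceptor is with postHandle, which runs after the controller and before the view is rendered; NewsModuleService here is a hypothetical wrapper around the web service, and the model attribute names are assumptions:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

public class WebNewsModulesHandlerInterceptor extends HandlerInterceptorAdapter {

	// Hypothetical service that queries the web service for module content
	public interface NewsModuleService {
		Object getPopularOnFacebook();
		Object getNewsPulse();
	}

	private NewsModuleService newsModuleService;

	public void setNewsModuleService(NewsModuleService newsModuleService) {
		this.newsModuleService = newsModuleService;
	}

	@Override
	public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler,
			ModelAndView modelAndView) throws Exception {
		// Add the web-only module content to the model; the controller stays unaware of it
		if (modelAndView != null) {
			modelAndView.addObject("popularOnFacebook", newsModuleService.getPopularOnFacebook());
			modelAndView.addObject("newsPulse", newsModuleService.getNewsPulse());
		}
	}
}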

Hope you understood how to decouple business logic using handler interceptors!


Change Log4j Levels at Runtime Using Openutils-Log4j

Log4j version 1.3 has a ConfigurationServlet that can be used to view and modify the current log4j logger settings for a web application. However, per the information on the Log4j website, version 1.3 has been abandoned:

Log4j 1.3 Version Development Has Been Abandoned

Openutils-log4j includes a configuration servlet that provides a simple web interface for changing the log4j levels:

Log4j Control Console

With the configuration below added to web.xml, the control console will be accessible under the URL pattern /chlog4j (e.g. http://localhost:8080/testwebapp/chlog4j):

<servlet>
	<display-name>Log4j configuration Servlet</display-name>
	<servlet-name>log4j</servlet-name>
	<servlet-class>it.openutils.log4j.Log4jConfigurationServlet</servlet-class>
</servlet>

<servlet-mapping>
	<servlet-name>log4j</servlet-name>
	<url-pattern>/chlog4j/*</url-pattern>
</servlet-mapping>

Dependency to be added in pom.xml:

<dependency>
  <groupId>net.sourceforge.openutils</groupId>
  <artifactId>openutils-log4j</artifactId>
  <version>2.0.3</version>
</dependency>
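
For context, a level change at runtime ultimately boils down to calls against the log4j 1.x API like the sketch below (the logger name is just an example); a web console such as the configuration servlet provides a UI over this kind of change:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class ChangeLogLevel {

	public static void main(String[] args) {
		// Example logger name; any logger used by the application can be adjusted
		Logger logger = Logger.getLogger("com.example.myapp");
		logger.setLevel(Level.DEBUG);

		// The root logger can be changed the same way
		Logger.getRootLogger().setLevel(Level.WARN);
	}
}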

Garbage Collectors in the J2SE 5.0 HotSpot JVM

Below is a summary of the different garbage collectors, prepared while reading the Memory Management in the Java HotSpot™ Virtual Machine whitepaper. Hope you find it useful!

Garbage Collectors Summary: Serial, Parallel, Parallel Compacting and Concurrent Mark-Sweep (CMS)

Notable points in Concurrent Mark-Sweep Collector:

  • Also known as the low-latency collector because it minimizes pause times
  • Some phases are executed concurrently with the application
  • The only garbage collector that does not compact the old generation after a major collection
  • Because no compaction is performed, a larger heap may be required
  • Has an option to run in incremental mode (-XX:+CMSIncrementalMode): in this mode the concurrent phases are done incrementally, which is useful on machines with fewer processors so that processing is yielded back to the application (an example command line is sketched below)
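
For reference, a minimal sketch of how CMS with incremental mode might be enabled (the heap sizes and the application jar are placeholders, not from the whitepaper):

java -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -jar myapp.jar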


How to Configure Connection Pool Using OracleDataSourceFactory in Tomcat?

Below is a sample configuration of a connection pool using OracleDataSourceFactory:

<Resource auth="Container"
name="jdbc/testDS"
type="oracle.jdbc.pool.OracleDataSource"
factory="oracle.jdbc.pool.OracleDataSourceFactory"
driverClassName="oracle.jdbc.OracleDriver"
url="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=hostname)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=hostname)(PORT=1521))(LOAD_BALANCE=yes)(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=XXXX)(FAILOVER_MODE=(TYPE=SELECT)(METHOD=BASIC)(RETRIES=180)(DELAY=5))))"
user="username"
password="password"
connectionCachingEnabled="true"
connectionCacheName="testCache"
connectionCacheProperties="{MinLimit=20,InitialLimit=20,MaxLimit=20,AbandonedConnectionTimeout=900}"
/>

Some points to note:

  • The factory attribute is set to oracle.jdbc.pool.OracleDataSourceFactory. If the factory attribute is not specified, the connection pool will be configured using the default Commons DBCP BasicDataSourceFactory.
    factory="oracle.jdbc.pool.OracleDataSourceFactory"
    
  • The user name is set using the attribute “user”.
    user="username"
    

    The user name for a connection pool using BasicDataSourceFactory is specified using the attribute username. Why the difference? Because user (or username) is not an attribute of Tomcat’s Resource element itself, but an attribute specific to the factory. A factory could be written to look for the user name in any attribute of its choice; for example, it could look for the user name in an attribute “dbuser”. OracleDataSourceFactory looks for the user name in the attributes user, userName (note the capital N) and u. Below is a screenshot of the relevant code in the OracleDataSourceFactory class, decompiled using Java Decompiler:

    Oracle Connection Pool Username
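
Once the resource is defined (and, if needed, referenced via a resource-ref entry in web.xml), the application obtains connections from the pool through a plain JNDI lookup. A minimal sketch, using the jdbc/testDS name from the configuration above (the class name is arbitrary):

import java.sql.Connection;

import javax.naming.InitialContext;
import javax.sql.DataSource;

public class TestDataSourceLookup {

	public static Connection getConnection() throws Exception {
		// Tomcat exposes container-managed resources under java:comp/env
		InitialContext ctx = new InitialContext();
		DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/testDS");
		return ds.getConnection(); // served from the connection cache configured above
	}
}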

Relative URI “web-jsptaglib_1_2.dtd”; can not be resolved without a base URI

I noticed the error below in the Tomcat logs while looking for some information; the application was working normally even with this error. Another instance of Tomcat (same version, 6.0.16) running the same web application didn’t have this error in its logs.

ERROR [main] (Digester.java:1555) - Parse Fatal Error at line 2 column -1: Relative URI "web-jsptaglib_1_2.dtd"; can not be resolved without a base URI.
org.xml.sax.SAXParseException: Relative URI "web-jsptaglib_1_2.dtd"; can not be resolved without a base URI.
        at org.apache.crimson.parser.Parser2.fatal(Parser2.java:3182)
        at org.apache.crimson.parser.Parser2.fatal(Parser2.java:3176)
        at org.apache.crimson.parser.Parser2.resolveURI(Parser2.java:2758)
        at org.apache.crimson.parser.Parser2.maybeExternalID(Parser2.java:2730)
        at org.apache.crimson.parser.Parser2.maybeDoctypeDecl(Parser2.java:1129)
        at org.apache.crimson.parser.Parser2.parseInternal(Parser2.java:489)
        at org.apache.crimson.parser.Parser2.parse(Parser2.java:305)
        at org.apache.crimson.parser.XMLReaderImpl.parse(XMLReaderImpl.java:442)
        at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1644)
        at org.apache.catalina.startup.TldConfig.tldScanStream(TldConfig.java:518)
        at org.apache.catalina.startup.TldConfig.tldScanTld(TldConfig.java:555)
        at org.apache.catalina.startup.TldConfig.execute(TldConfig.java:295)
        at org.apache.catalina.core.StandardContext.processTlds(StandardContext.java:4441)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4248)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)

Though the application was working, I wondered whether any JSP custom tags were not working as expected; some of the custom tags were non-UI related, so it would not be obvious to end users if those tags didn’t work the way they should. Below are the findings I made while debugging this issue:

  1. The Crimson parser was used to parse the TLD, as seen in the stack trace:
    at org.apache.crimson.parser.XMLReaderImpl.parse(XMLReaderImpl.java:442)
    
  2. The web application that I deployed didn’t have any Crimson-related jars, and I verified that Crimson-related jars are not in TOMCAT_HOME/lib either. Puzzled, I ran “findjars crimson” in TOMCAT_HOME and found that another web app deployed on the same Tomcat has a jar containing the Crimson parser classes and the file META-INF/services/javax.xml.parsers.SAXParserFactory with the value “org.apache.crimson.jaxp.SAXParserFactoryImpl”.
  3. The Javadoc for the method SAXParserFactory#newInstance() has details on how a SAXParserFactory implementation is chosen (the sketch after this list shows one way to check which implementation is actually picked up at runtime). From the javadoc:

    Use the Services API (as detailed in the JAR specification), if available, to determine the classname. The Services API will look for a classname in the file META-INF/services/javax.xml.parsers.SAXParserFactory in jars available to the runtime.

  4. The Crimson parser from the jar in another web application (say webapp1) was used to parse a TLD in the webapp (say webapp2) that I deployed. Per Tomcat’s class loader documentation, jars/classes in one webapp are not visible to other webapps deployed on the same Tomcat, so I suspected this to be a bug in Tomcat.
  5. I looked into Tomcat’s source code to see if I could spot the bug. I noticed that the getParser() method in Digester.java in tomcat-coyote.jar checks whether the parser is not null.

    Looking at the source code, I guessed that if Tomcat deploys webapp1 first, the XML parser in Digester will be set to the Crimson parser, and when webapp2 is deployed the parser variable will not be null, so the Crimson parser will be used for webapp2 as well. But this didn’t sound right, because a jar in one webapp would be deciding which parser is used for processing TLDs in another (possibly every other) webapp.
  6. I found Bug 29936 – XML parser loading problems by container, which seemed related to the problem I was having. The fix was included in Tomcat 6.0.18, so I deployed the web applications on Tomcat 6.0.18 and no longer saw the error message in the log file!
  7. From Comment 8:

    A possible solution is to load the (default) parser into the Digester prior to
    having it being loaded by the WebappClassloader. Since this appears to be
    one-time settable, it will use this parser regardless of what the webapp has.

    I was really interested in seeing the code changes made as part of this bug fix, but I couldn’t find a link or documentation detailing what changes were made. After trying a few things I found the changes here. (See this post to know how I found this link.)
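
As an aside, a quick way to check which SAXParserFactory implementation (and which code source) is being picked up at runtime is a small diagnostic like the one below, run from the affected webapp; this is just a sketch, not part of the Tomcat fix:

import javax.xml.parsers.SAXParserFactory;

public class SaxParserFactoryCheck {

	public static void main(String[] args) {
		// Resolved via the JAXP lookup order: system property, jaxp.properties,
		// the JAR Services API (META-INF/services/javax.xml.parsers.SAXParserFactory),
		// and finally the platform default
		SAXParserFactory factory = SAXParserFactory.newInstance();
		System.out.println("Implementation: " + factory.getClass().getName());
		System.out.println("Loaded from: " + factory.getClass().getProtectionDomain().getCodeSource());
	}
}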

So, upgrading to Tomcat 6.0.18 would fix the XML parser issue. I am still not clear whether the application should work as expected even with the error we are seeing in the log file on 6.0.16; I need to find that out!

Full Garbage Collections Manifested as Spikes in Response Times

While performance testing a web application deployed on Tomcat, we noticed that the average response time was about 0.5 seconds, while some requests were seeing response times of about 2.0 seconds. In JConsole we noticed a see-saw pattern in memory usage; more importantly, we noticed that there had been 20 major garbage collections and that the garbage collector used is PS MarkSweep.

JConsole - Full Garbage Collection

The average time for a major garbage collection was 1.4 seconds. Since the PS MarkSweep garbage collector pauses the application while performing a major garbage collection, the response time for a request arriving while a major collection is in progress is the normal request time plus the full GC time, i.e., 0.5 + 1.4 ≈ 2.0 seconds.
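
The same counts and accumulated times that JConsole displays can also be read programmatically through the garbage collector MXBeans; a small sketch (the collector names printed depend on which collectors are in use):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {

	public static void main(String[] args) {
		// With the parallel collectors these typically show up as
		// "PS Scavenge" (minor collections) and "PS MarkSweep" (major collections)
		for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
			System.out.println(gc.getName() + ": count=" + gc.getCollectionCount()
					+ ", time=" + gc.getCollectionTime() + " ms");
		}
	}
}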

After increasing the heap size (thereby giving more memory to the young generation and the two survivor spaces), no major garbage collections and no spikes in response times were observed.

JConsole - No Full Garbage Collection

Rationale: when a minor garbage collection is performed, live objects in Eden and in the occupied survivor space are copied into the empty survivor space. Any objects that cannot be copied into the survivor space (because it is full) are copied into the tenured region. After a few such minor collections the tenured region gets full, and a full garbage collection is performed to free up space in it. By increasing the heap, the survivor spaces become large enough to hold all live objects during a minor collection, so copying to the tenured region is avoided.

JMeter Hello World – Submit Form

I used JMeter recently to generate load on a webapp while debugging a production issue. Below are simple step-by-step instructions for submitting a form using JMeter.

  1. Start JMeter
  2. Add Thread Group

     

  3. Define thread group properties.
    • Generate 50 concurrent requests (50 threads)
    • Start with one thread initially, start a second thread two seconds after the start time, a third thread four seconds after the start time, etc. (ramp-up period)
    • Repeat for 1000 iterations (loop count)

     

  4. Add HTTP Request Sampler
  5. Define HTTP Request properties
    • Define the server name (localhost in screenshot below)
    • Define port number (8080)
    • Define the path (relative path, e.g. /signup.do)
    • Change the method to “POST”
    • Add form fields (name and password)
  6. Add listener “Save responses to a file”
  7. Define response file location
  8. Start the test by selecting “Start” from “Run” menu

NOTE: the listener “Save Responses to a file” can be disabled by right-clicking the item and selecting “Disable”. For my test, I enabled it with the thread and loop counts set to 1, ran the test and verified the response. Once everything was working as expected, I disabled it and generated load by increasing the thread and loop counts.