Monday, 10 November 2014

Remote Debugging of Jenkins "Maven Project" Jobs

To remote debug a Jenkins "Maven Project" job one has to add

 -Dmaven.surefire.debug="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Xnoagent -Djava.compiler=NONE"   

to the "Goals" line of the Maven Build step. Also see

Note: Do not add it to the "JVM Options" line of the Maven Build step. If you do, Eclipse will be able to attach to the remote JVM, but it will not stop at breakpoints.
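Putting it together, the complete Goals line might look like this (the goals themselves are illustrative; any goals that run Surefire tests will do):

```
clean test -Dmaven.surefire.debug="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Xnoagent -Djava.compiler=NONE"
```

Because of suspend=y, the forked Surefire JVM waits until a debugger attaches on port 8000, so the build will appear to hang until Eclipse connects.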

Friday, 31 October 2014

ModSecurity Rule Execution Order and ctl:ruleRemoveById

In ModSecurity, rules are executed in the order in which they are "physically" included into Apache's httpd.conf file: first all the rules for phase 1, then all the rules for phase 2, and so on.

The documentation for ctl:ruleRemoveById states that "since this action is triggered at run time, it should be specified before the rule which it is disabling"

"Before" in this case means that the rule containing ctl:ruleRemoveById needs to run before the rule to be removed.

This means that if the rule to be removed runs in phase 1, then the rule removing it needs to be "physically" included before the rule to be removed.

But if the rule to be removed runs in phase 2, then the rule removing it can be "physically" included after the rule to be removed, as long as it runs in phase 1.
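To make this concrete, here is a minimal sketch (the rule IDs and the URI are made up for illustration): because the removing rule runs in phase 1, it executes before any phase-2 rule, regardless of where it is physically included in httpd.conf:

```apache
# Phase-1 rule that disables rule 950001 (assumed to be a phase-2 rule)
# at runtime for a specific URI. Since phase 1 always runs before
# phase 2, this works even if it is included after rule 950001.
SecRule REQUEST_URI "@beginsWith /legacy/upload" \
    "id:900100,phase:1,t:none,pass,nolog,ctl:ruleRemoveById=950001"
```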

Wednesday, 1 October 2014

Spring MVC: Setting 'alwaysUseFullPath' on 'RequestMappingHandlerMapping' when using 'mvc:annotation-driven'

It seems that the recommended way to set 'alwaysUseFullPath' on 'RequestMappingHandlerMapping' when using <mvc:annotation-driven /> is to use a 'BeanPostProcessor':

 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.beans.BeansException;
 import org.springframework.beans.factory.config.BeanPostProcessor;
 import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping;

 public class MyBeanPostProcessor implements BeanPostProcessor {
   private static final Logger logger = LoggerFactory.getLogger(MyBeanPostProcessor.class);

   public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
     if (bean instanceof RequestMappingHandlerMapping) {
       setAlwaysUseFullPath((RequestMappingHandlerMapping) bean, beanName);
     }
     return bean;
   }

   private void setAlwaysUseFullPath(RequestMappingHandlerMapping requestMappingHandlerMapping, String beanName) {
     logger.info("Setting 'alwaysUseFullPath' on 'RequestMappingHandlerMapping'-bean to true. Bean name: {}", beanName);
     requestMappingHandlerMapping.setAlwaysUseFullPath(true);
   }

   public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
     return bean;
   }
 }
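
Note that the post-processor only takes effect if it is registered in the application context, for example as a plain bean definition (the package name here is hypothetical):

```xml
<bean class="com.example.web.MyBeanPostProcessor" />
```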


Monday, 22 September 2014

PostgreSQL Explain

Some info about PostgreSQL Explain - mainly for my own reference:

1) Include information on buffer usage:

 explain ( ANALYZE, BUFFERS ) select ...  

Tells us how many blocks are read from disk and how many come from the PostgreSQL cache.

2) Display shared_buffers size:

 SELECT current_setting('shared_buffers') AS shared_buffers  

Also see: Memory - shared_buffers

3) Influence the query plans chosen by the query optimizer:

 SET enable_seqscan TO off;  
 SET enable_seqscan TO on;  

For more options see: Planner Method Configuration

4) Drop the Linux page cache and the PostgreSQL cache (by restarting PostgreSQL):

 $ /etc/init.d/postgresql stop  
 $ sync  
 $ echo 3 > /proc/sys/vm/drop_caches  
 $ /etc/init.d/postgresql start  

5) Update the table's statistics:

 ANALYZE [tablename]  

Stores the information in "pg_statistic"; use the view "pg_stats" to look at the data.
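For example, to refresh and then inspect the statistics of a hypothetical table 'foo' (the column names come from the pg_stats view):

```sql
ANALYZE foo;

SELECT attname, null_frac, n_distinct
FROM pg_stats
WHERE tablename = 'foo';
```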

6) Specify an operator class to support LIKE queries on data stored in UTF-8:

 CREATE INDEX ON foo(myColumn text_pattern_ops);
 EXPLAIN SELECT * FROM foo WHERE myColumn LIKE 'abcd%'; 

When the database uses a locale other than C, you need to specify an operator class (varchar_pattern_ops, text_pattern_ops, etc.) when creating the index.


For more information see: Understanding EXPLAIN

Monday, 18 August 2014

Mobile Phone Standards


Standard (status-bar indicator): typical speed (range)

  • GSM (CSD): 9.6 kbit/s
  • GPRS: "G" ("o" on iOS): 53 kbit/s (35-171 kbit/s)
  • EDGE: "E": 220 kbit/s (120-384 kbit/s)
  • UMTS: "3G": 384 kbit/s (384 kbit/s to 2 Mbit/s)
  • HSDPA: "H" / 3.5G / 3G+ ("3G" on iOS): 7.2 Mbit/s (600 kbit/s to 10 Mbit/s)
  • HSPA+: "H+" ("3G" on iOS): 14.4 Mbit/s (up to 42 Mbit/s)
  • LTE ("4G" on Android): 100 Mbit/s (up to 300 Mbit/s)
  • LTE-Advanced: 1 Gbit/s

* Speed depends on signal strength, frequencies used, congestion, etc.

Saturday, 26 July 2014

Unicode and Encodings

Here is a summary of all things Unicode:

  • Unicode maps integers (code points, ranging from 0 to 0x10FFFF) to characters
  • The first 128 code points (hex values 00 to 7F) are the same as ASCII 
  • The next 128 code points (0x80-0xFF) are the same as ISO-8859-1
  • An encoding is a mapping between Unicode code points and byte sequences 
Character Reference and Code Tables
  • A plane is a continuous group of 65,536 (= 2^16) code points 
  • There are 17 planes, identified by the numbers 0 to 16 
  • The Basic Multilingual Plane (BMP) is plane 0 (0000-FFFF)
  • Planes 1-16 are called "supplementary planes" 
  • The code points in each plane have the hexadecimal values xx0000 to xxFFFF, where xx is a hex value from 00 to 10, signifying the plane to which the values belong
UTF-8 Encoding
  • Variable-width: encodes each code point as one to four bytes
  • Code points 00-7F are encoded as single bytes, so valid ASCII text is also valid UTF-8
UTF-16 Encoding
  • Encodes code points as one or two 16-bit code units
  • The code points defined by the BMP are encoded as single 16-bit code units that are numerically equal to the corresponding code points
  • Code points from the supplementary planes are encoded by pairs of 16-bit code units called surrogate pairs
UTF-32 Encoding
  • Uses exactly 32 bits per Unicode code point
  • The UTF-32 form of a character is a direct representation of its code point
  • Example: 00 00 00 61 is the big-endian UTF-32 encoding of code point U+0061, which is 'a' 
Byte Order Mark (BOM)
  • U+FEFF
  • If the endian architecture of the decoder matches that of the encoder, the decoder detects the 0xFEFF value, but an opposite-endian decoder interprets the BOM as the non-character value U+FFFE reserved for this purpose. This incorrect result provides a hint to perform byte-swapping for the remaining values
  • In UTF-16, a BOM (U+FEFF) may be placed as the first character of a file or character stream
  • The UTF-8 representation of the BOM is the byte sequence 0xEF,0xBB,0xBF
  • The Unicode Standard neither requires nor recommends the use of the BOM for UTF-8 
  • HTML Entity: &#0229; (decimal) or &#x00e5; (hex) (= å)
URL Unicode Encoding
  • UTF-16: %uXXXX, e.g. %u00e9 -> é
  • UTF-8: %XX[%XX][%XX][%XX], e.g. %c2%a9 -> © %e2%89%a0 -> ≠
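A small Java program (illustrative only) ties several of these points together: a code point from the ISO-8859-1 range needs two bytes in UTF-8, and a supplementary-plane code point needs a UTF-16 surrogate pair:

```java
import java.nio.charset.StandardCharsets;

public class UnicodeDemo {
    public static void main(String[] args) {
        // U+00E5 ('å', decimal 229) lies in the ISO-8859-1 range (0x80-0xFF)
        String s = "\u00e5";
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        // In UTF-8 this code point takes two bytes: C3 A5
        System.out.printf("U+%04X -> UTF-8: %02X %02X%n",
                (int) s.charAt(0), utf8[0] & 0xFF, utf8[1] & 0xFF);

        // U+1D11E (musical G clef) is in a supplementary plane,
        // so UTF-16 needs a surrogate pair (two 16-bit code units)
        String clef = new String(Character.toChars(0x1D11E));
        System.out.println("UTF-16 code units: " + clef.length());
    }
}
```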
Compiled from:

Friday, 25 July 2014

Video & Audio Containers & Codecs

"You may think of video files as “AVI files” or “MP4 files.” In reality, “AVI” and “MP4? are just container formats. Just like a ZIP file can contain any sort of file within it, video container formats only define how to store things within them, not what kinds of data are stored. (It’s a little more complicated than that, because not all video streams are compatible with all container formats, but never mind that for now.)

A video file usually contains multiple tracks — a video track (without audio), plus one or more audio tracks (without video). Tracks are usually interrelated. An audio track contains markers within it to help synchronize the audio with the video. Individual tracks can have metadata, such as the aspect ratio of a video track, or the language of an audio track. Containers can also have metadata, such as the title of the video itself, cover art for the video, episode numbers (for television shows), and so on." from

Container: extension | common video codec | common audio codec | Alfresco registered MimeType

MPEG4: .mp4 | H.264 | AAC | .mp4: video/mp4, .m4v: video/x-m4v
  • Developed by ISO.
  • The MPEG 4 container is based on Apple’s older QuickTime container (.mov).
  • Can also be used to store other data such as subtitles and still images.
  • MP4 files can contain metadata as defined by the format standard and, in addition, Extensible Metadata Platform (XMP) metadata.
  • More recent versions of Flash also support the MPEG 4 container.

WebM: .webm | VP8 | Vorbis | video/webm
  • Audio-video format designed to provide a royalty-free, open video compression format for use with HTML5 video. Development is sponsored by Google.
  • Based on the Matroska Media Container.
  • Adobe has also announced that a future version of Flash will support WebM video.

Ogg: .ogv | Theora (= Ogg Video) | Vorbis (= Ogg Audio) | video/ogg
  • Ogg is a free, open container format maintained by the Xiph.Org Foundation: an open standard, open source-friendly, and unencumbered by any known patents.
  • The Ogg container format can multiplex a number of independent streams for audio, video, text (such as subtitles), and metadata.

Flash Video: .flv | H.264, VP6, Sorenson Spark | - | video/x-flv
  • Developed by Adobe Systems.
  • Prior to Flash Player 9 Update 3, this was the only container format that Flash supported.

Audio Video Interleave: .avi | MPEG-4 Part 2 | MP3 | video/x-msvideo
  • The AVI container format was invented by Microsoft in a simpler time.
  • It does not even officially support most of the modern video and audio codecs in use today.

Matroska: .mkv | H.264 | Vorbis | -
  • The Matroska Multimedia Container is an open-standard, free container format that can hold an unlimited number of video, audio, picture or subtitle tracks in one file.

RealMedia: .rm | RealVideo | RealAudio | -
  • RealMedia is a proprietary multimedia container format created by RealNetworks. It is used for streaming content over the Internet.

3GP: .3gp | H.264 | - | -
  • Used on 3G mobile phones but can also be played on some 2G and 4G phones.

3G2: .3g2 | H.264 | - | video/x-3gpp2
  • Very similar to the 3GP file format, but has some extensions and limitations in comparison to 3GP.

QuickTime: .mov | H.264 | AAC | video/quicktime
  • Developed by Apple Inc.
  • Multimedia container that contains one or more tracks, each of which stores a particular type of data: audio, video, effects, or text (e.g. for subtitles).

Advanced Systems Format: .asf | Windows Media Video | Windows Media Audio | video/x-ms-asf
  • Microsoft's proprietary digital audio/digital video container format, especially meant for streaming media.
  • Files containing only WMA audio can be named using a .WMA extension, and files with audio and video content may use the extension .WMV. Both may use the .ASF extension if desired.

Container: extension | common audio codec | Alfresco registered MimeType

Ogg: .oga | Vorbis (= Ogg Audio) | audio/ogg
  • Lossy audio compression.
  • The Xiph.Org Foundation recommends that .ogg only be used for Ogg Vorbis audio files.

MP3: .mp3 | MP3 | audio/x-mpeg
  • Lossy audio compression.
  • An MP3 file created at a setting of 128 kbit/s will be about 1/11 the size of a file created from the original CD audio source.
  • Several bit rates are specified in the MPEG-1 Audio Layer III standard: 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and 320 kbit/s; the available sampling frequencies are 32, 44.1 and 48 kHz.
  • Additional extensions were defined in MPEG-2 Audio Layer III: bit rates 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144 and 160 kbit/s, and sampling frequencies 16, 22.05 and 24 kHz.
  • A sample rate of 44.1 kHz is almost always used, because this is also used for CD audio, the main source for creating MP3 files.
  • Most MP3 files today contain ID3 metadata.
  • The MP3 format allows variable bitrate encoding, which means that some parts of the encoded stream are compressed more than others.

WAV: .wav | PCM | audio/x-wav
  • Developed by Microsoft and IBM.

Advanced Audio Coding: .m4a | AAC | audio/aac
  • Lossy audio compression.
  • AAC generally achieves better sound quality than MP3 at similar bit rates.
  • AAC is also the default or standard audio format for iPhone, iPod, iPad, Nintendo DSi, iTunes and PlayStation 3.

Matroska: .mka | Vorbis | -

Advanced Systems Format: .wma | Windows Media Audio | audio/x-ms-wma
  • An audio data compression technology developed by Microsoft.
  • The name can be used to refer to its audio file format or its audio codecs.


General syntax: ffmpeg [global options] [[infile options]['-i' infile]]... {[outfile options] outfile}...

General options

-i: Specifies the source file (input file) and lists information about it (metadata, bitrate, encoding, etc.). Example: ffmpeg -i lala.mp3
-codecs: Lists all available codecs. Example: ffmpeg -codecs
-formats: Lists all available formats. Example: ffmpeg -formats

Important audio options

-acodec: The audio codec with which the target file is to be encoded, e.g. libvorbis, libmp3lame. To keep the codec of the source file, the special value 'copy' can be used, in which case no transcoding takes place: -acodec copy. Example: ffmpeg -i lala.mp3 -acodec libvorbis lala.ogg
-ab: The bitrate at which the target file is encoded. A lower bitrate reduces the file size, but also the quality. It makes no sense to specify a higher bitrate for the target file than the source file has. Example: ffmpeg -i zzz.mp3 -ab 64k zzz2.mp3
-aq: The audio quality, for codecs with variable bitrate.
-ar: The sampling frequency in Hertz. It makes no sense to specify a higher frequency for the target file than the source file has. Example: ffmpeg -i zzz.mp3 -ar 22050 -ab 96k zzz2.mp3
-ss: When used as an input option (before -i), seeks in this input file to position. When used as an output option (before an output filename), decodes but discards input until the timestamps reach position. This is slower, but more accurate. Position may be either in seconds or in hh:mm:ss[.xxx] form. Example: ffmpeg -ss 00:00:30.00 -t 25 -i bar.mp3 -acodec copy bar-new.mp3
-t: Stop writing the output after its duration reaches duration. Duration may be a number in seconds, or in hh:mm:ss[.xxx] form.
-ac: Set the number of audio channels. For output streams it is set by default to the number of input audio channels. Example: ffmpeg -i zzz.mp3 -ac 1 zzz2.mp3

Important video options

-b: Bitrate. Example: -b 2000k
-vcodec: The video codec. Examples: -vcodec mpeg4, -vcodec copy
-s: Frame size. Examples: -s 320x240, -s xga
-aspect: Aspect ratio. Example: -aspect 4:3
-target: Predefined targets (all the format options (bitrate, codecs, buffer sizes) are then set automatically). Example: -target ntsc-dvd
-r: Frame rate. Example: -r 10
-f: Container format. Example: -f avi
-ss: When used as an input option (before -i), seeks in this input file to position. When used as an output option (before an output filename), decodes but discards input until the timestamps reach position. This is slower, but more accurate. Position may be either in seconds or in hh:mm:ss[.xxx] form. Example (extract an image): extract one frame at second five, over one second (at a frame rate of one frame per second), with a size of 320x240: -r 1 -t 1 -ss 5 -s 320x240
-t: Stop writing the output after its duration reaches duration. Duration may be a number in seconds, or in hh:mm:ss[.xxx] form.

A FFmpeg Tutorial For Beginners
Using ffmpeg to manipulate audio and video files
FFmpeg – the swiss army knife of Internet Streaming

Tuesday, 8 July 2014

Remove a single entry from a MyBatis cache programmatically

MyBatis automatically flushes its cache on an insert/update/delete statement. But what if you need to flush an item from the cache because its database representation has been changed by a different application? Here is how to remove a single entry from a MyBatis cache programmatically:

public Object removeCacheEntry(SqlSession sqlSession, String cacheId, String mappingName, Object parameterObject) {
    Object removedObject = null;
    Cache cache = getCache(sqlSession, cacheId);
    if (cache != null) {
        CacheKey cacheKey = getCacheKey(sqlSession, mappingName, parameterObject);
        if (cacheKey != null) {
            removedObject = cache.removeObject(cacheKey);
            logger.info("Remove from cache: {} by key {}", removedObject, cacheKey);
        }
    }
    return removedObject;
}

private Cache getCache(SqlSession sqlSession, String cacheId) {
    return sqlSession.getConfiguration().getCache(cacheId);
}

private CacheKey getCacheKey(SqlSession sqlSession, String mappingName, Object parameterObject) {
    Configuration configuration = sqlSession.getConfiguration();
    // The executor is only used to build the cache key; no transaction is needed
    SimpleExecutor executor = new SimpleExecutor(configuration, null);
    MappedStatement mappedStatement = configuration.getMappedStatement(mappingName);
    BoundSql boundSql = mappedStatement.getBoundSql(parameterObject);
    return executor.createCacheKey(mappedStatement, parameterObject, RowBounds.DEFAULT, boundSql);
}

For example:

removeCacheEntry(sqlSession, "com.test.MyEntityMapper", "com.test.MyEntityMapper.getById", 1);

Wednesday, 14 May 2014

Installing Ubuntu 14.04 as a guest on a Windows 8.1 VirtualBox host system

Here are three stumbling stones I came across while installing Ubuntu 14.04 as a guest on a Windows 8.1 VirtualBox host system:

1) To be able to install Ubuntu 14.04 as 64-bit I needed to go into the BIOS settings and enable virtualization.

2) After the installation the screen resolution was awfully small and could not be changed. I had to install the VirtualBox "Guest Additions": sudo apt-get install virtualbox-guest-dkms

3) To be able to access an automatically mounted shared drive you have to add your Ubuntu user to the group "vboxsf" (e.g. sudo usermod -aG vboxsf <username>, then log out and back in). Then you will be able to see the shared drive 'sf_*' under '/media', e.g. '/media/sf_mySharedFolder'.

Content Security Policy Filter


I created a configurable Content Security Policy Java Servlet Filter for setting the 'Content-Security-Policy' / 'Content-Security-Policy-Report-Only' header on a ServletResponse.

It can be found on GitHub:


Saturday, 12 April 2014

Java Garbage Collection

I re-read "Java Performance" by Charlie Hunt and here is some general information about garbage collection - mainly for my own reference:

The Java heap size is the size of the young generation and the old generation spaces (does not include the permanent generation) (p.111)

The application's memory footprint is the heap size + the perm generation + the thread stack sizes (p.277)

There are three aspects to garbage collection (p.262)
  • Throughput: How much time is spent in garbage collection vs. how much time is spent executing application code
  • Latency/Responsiveness: How much pause time in executing application code does GC introduce, i.e. how is the response time affected by GC
  • Memory: The amount of memory used

GC Monitoring

Switch on GC monitoring using: -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<filename> -> Analyze using GCHisto (p.121)

To show how long the application runs and how long GC takes, add: -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime (p.120)

The occupancy of the young generation after a minor GC is the survivor space occupancy (p.111) 

The live data size is the amount of memory occupied by long-lived objects in the old and perm generation. It is the size of the heap after a full GC - calculate an average over multiple full GCs (p.113, 268, 274)

The parallel garbage collector uses adaptive heap sizing, i.e. the HotSpot VM initially uses the explicit young generation sizing settings (-Xmn, -XX:NewSize, ...) and then automatically adjusts the young generation space sizes from those settings. This can be disabled using the -XX:-UseAdaptiveSizePolicy flag. (p.105)

If you want to know what GC ergonomics is thinking, try adding -XX:+PrintAdaptiveSizePolicy or -XX:AdaptiveSizePolicyOutputInterval=1. The latter will print out, every i-th GC, information about what GC ergonomics is trying to do.

Heap size starting points (p.276)
  • Set -Xms & -Xmx to 3 to 4 times the live data size of the old generation
  • Set -XX:PermSize and -XX:MaxPermSize to 1.2 to 1.5 times the live data size of the perm generation
  • The young generation should be 1 to 1.5 times the live data size of the old generation
  • The old generation should be 2 to 3 times the live data size of the old generation
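As a hypothetical worked example (the numbers are invented): suppose repeated full GCs show about 512 MB of live data in the old generation and 128 MB in the perm generation. The starting points above then suggest flags roughly like:

```
-Xms2048m -Xmx2048m                      # heap = young (1x) + old (3x) live data size
-Xmn512m                                 # young generation = 1x live data size
-XX:PermSize=160m -XX:MaxPermSize=192m   # about 1.25x / 1.5x perm live data size
```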

If young GC takes too long, reduce the young generation size - if it occurs too frequently, increase it (p.280)

A parallel (throughput) garbage collector overhead of near 1% is considered well tuned. With >3% overhead, tuning may improve the application's performance; it should be less than 5%. The larger the heap, the better the opportunity for low GC overhead - but this will also increase the maximum stop-the-world time. (p.122, 314)

If worst case full GC time is unacceptable switch to a concurrent GC (Concurrent Mark Sweep or G1) (p.287)

Thursday, 10 April 2014

Samaxes Minify Maven Plugin & Eclipse - Deployment to Tomcat

I started using the Samaxes Minify Maven Plugin to minify my JavaScript and CSS files. The usage is pretty easy and I ended up with the following config in my POM:


This worked fine when I ran 'mvn clean install' from my command line.

But I am also using Eclipse (with the whole ecosystem of m2e / WTP / m2e-wtp) and Tomcat from within Eclipse. And I wanted to be able to publish the generated resources "script-min.js" and "style-min.css" to Tomcat using Eclipse.

Here is what I did in the end:

Did you notice the line in the plugin configuration?


By default the Minify Maven Plugin generates the resource into the 'target' folder. By setting this parameter I let the plugin generate the resources into the same folder as the source js and css files.

Now one simply has to run 'mvn generate-resources' either from the command line or from within Eclipse and then do a "Refresh" (F5) on the project and afterwards call 'Publish' on the Tomcat from within Eclipse. If Tomcat is already running it will even auto-publish the change.

This is it and this is what I do.

Note 1:

As an alternative to running 'mvn generate-resources' one can also call "Clean" on the project and let m2e rebuild it. There is just one additional thing if one wants to go with this alternative: by default m2e does not execute the "Minify Maven Plugin", so you have to tell it to do so in your pom.xml:

       <!-- We need to add this so that m2e (maven2eclipse) will execute the minify-maven-plugin's goal 'minify' when it builds the eclipse project -->
        <execute />

That's it.

Note 2: Within the 'm2e lifecycle mapping' configuration I also tried to set


which would generate "script-min.js" and "style-min.css" each time one made a change to any of the source JS or CSS files. This worked nicely when I manually called 'Publish', but I encountered a problem when Tomcat was running and trying to auto-publish the changes:

When I changed one of the JS/CSS files Eclipse would get into an endless loop of automatically republishing and building the project.

As I do not very often change the JS/CSS files in this project I decided it was best to go with the manual steps of running 'mvn generate-resources' (from Eclipse), refreshing by hitting F5, and then publishing.

Monday, 7 April 2014

Oracle HotSpot JVM Command Line Flags

Find information about all the Oracle HotSpot JVM Command Line Flags:

One can use -XX:+PrintCommandLineFlags to see the default command line flags. For example:

C:\>java -XX:+PrintCommandLineFlags -version

-XX:InitialHeapSize=263467392 -XX:MaxHeapSize=4215478272 -XX:+PrintCommandLineFlags 
-XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:-UseLargePagesIndividualAllocation 
java version "1.8.0"
Java(TM) SE Runtime Environment (build 1.8.0-b132)
Java HotSpot(TM) 64-Bit Server VM (build 25.0-b70, mixed mode)

tells us that Java 8 uses the Parallel Garbage Collector (-XX:+UseParallelGC) by default, that the default initial heap size is 1/64 of my machine's RAM (263467392 bytes - about 250 MB of 16 GB), and that the default max heap size is 25% of my machine's RAM (4215478272 bytes - about 4 GB).

One can also find out the value of a specific command line flag using the tool jinfo. For example

C:\>jinfo -flag MaxHeapSize 3216

-XX:MaxHeapSize=4215478272

tells us again the value of the max heap size. 3216 is the process id of the Java process you are interested in. You can find out the process ids of locally running Java processes using the tool jps:

C:\>jps
3216 Bootstrap
5872 Jps
2944 org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar

Tuesday, 18 March 2014

Setting the HTTP Cache-Control header on Spring MVC controller methods via annotations

I started using Spring MVC in a project and was really surprised to find that I could not specify HTTP Cache-Control settings on my controller methods via annotations.

There are more people who miss this feature in Spring MVC and there are currently two issues open regarding this:


I was just about to implement this functionality myself when I found that thankfully Scott Rossillo already had:

Using "spring-mvc-cache-control" one simply registers a Spring MVC HandlerInterceptor in the Spring Dispatcher context file:

  <bean class="net.rossillo.spring.web.mvc.CacheControlHandlerInterceptor" />

Then one can happily annotate ones controller methods:

public final class MyTestController {

    @CacheControl(maxAge = 300, policy = { CachePolicy.PUBLIC })
    @RequestMapping("/test") // mapping path assumed for illustration
    public void test() {
        System.out.println("In test method");
    }

    @CacheControl(policy = { CachePolicy.NO_CACHE })
    @RequestMapping("/test2") // mapping path assumed for illustration
    public void test2() {
        System.out.println("In test2 method");
    }
}
That's basically it - but as a bonus here are a few interesting things about caching:

1) When a user reloads the current page (e.g. hits F5) the browser will send a conditional request (using a If-Modified-Since or If-None-Match header in the request when a Last-Modified or ETag header was specified in the response) to validate the cached page has not changed. Only when a user gets to a page via a link will the browser not send a conditional request but directly display the cached page.

2) If there is no Cache-Control header but a Last-Modified header, then Firefox calculates an expiration value as specified in HTTP 1.1:
Also, if the response does have a Last-Modified time, the heuristic
expiration value SHOULD be no more than some fraction of the interval
since that time. A typical setting of this fraction might be 10%.
3) Imagine the scenario where you send your content gzip-encoded to the browser and it gets cached by a proxy server. Now a different browser requests the same page from the proxy but does not support gzip-encoded content. Dilemma. The solution: one can use the Vary header to tell a proxy to cache different versions of your page depending on one or more request header values specified by the browsers, e.g. Vary: Accept-Encoding
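For the proxy scenario in point 3, the response headers might look like this (the values are illustrative):

```
Cache-Control: public, max-age=300
Content-Encoding: gzip
Vary: Accept-Encoding
```

A proxy seeing this keys its cache entries on the request's Accept-Encoding header, so gzip-capable and gzip-incapable clients get different cached variants of the same page.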

Wednesday, 29 January 2014

SonarQube analysis with Maven for a JavaScript project with multiple source directories

Here is how I convinced SonarQube to analyze a Maven JavaScript project with multiple source directories.

As SonarQube by default uses the standard Maven source directory (src/main/java) (and also ignores the property "sonar.sources" when running from Maven), I had to specify the source directory of my webapp and then use the property "sonar.inclusions" to specify in which subdirectories of my webapp directory my JavaScript source files reside. Here is an excerpt from my POM: