Using Boost Logging

Simple std::cout works great, but there are a few problems:

  1. Logging output only goes to the console
  2. If the program runs for a long time via cron / a scheduler, output has to be redirected to a file
  3. Doing so can produce a large log file that the program holds open, so the program has to be stopped before the log can be cleaned

Here’s how to use the Boost Logging library:

#include <iostream>

#include "boost/log/trivial.hpp"
#include "boost/log/utility/setup.hpp"

using namespace std;

int main() {
  // Output message to console
  boost::log::add_console_log(
    cout, 
    boost::log::keywords::format = "[%TimeStamp%]: %Message%",
    boost::log::keywords::auto_flush = true
  );

  // Output messages to file, rotating when the file reaches 1mb or at midnight every day. Each log
  // file is capped at 1mb and the total at 20mb
  boost::log::add_file_log (
    boost::log::keywords::file_name = "MyApp_%3N.log",
    boost::log::keywords::rotation_size = 1 * 1024 * 1024,
    boost::log::keywords::max_size = 20 * 1024 * 1024,
    boost::log::keywords::time_based_rotation = boost::log::sinks::file::rotation_at_time_point(0, 0, 0),
    boost::log::keywords::format = "[%TimeStamp%]: %Message%",
    boost::log::keywords::auto_flush = true
  );

  boost::log::add_common_attributes();

  // Only output messages with INFO or higher severity
  boost::log::core::get()->set_filter(
    boost::log::trivial::severity >= boost::log::trivial::info
  );

  // Output some simple log messages
  BOOST_LOG_TRIVIAL(trace) << "A trace severity message";
  BOOST_LOG_TRIVIAL(debug) << "A debug severity message";
  BOOST_LOG_TRIVIAL(info) << "An informational severity message";
  BOOST_LOG_TRIVIAL(warning) << "A warning severity message";
  BOOST_LOG_TRIVIAL(error) << "An error severity message";
  BOOST_LOG_TRIVIAL(fatal) << "A fatal severity message";
}

Using Boost On Visual Studio Project

To use Boost libraries, set the following configurations in the Visual Studio project properties:

  1. Check if boost is already installed (eg: C:\Program Files (x86)\boost\boost_1_51_0); if not, go through the installation process at http://www.boost.org/doc/libs/1_52_0/more/getting_started/windows.html
  2. Ensure the BOOST_HOME environment variable exists and points to the installation path above
  3. In the project properties, under C/C++ -> General, add $(BOOST_HOME) to Additional Include Directories
  4. Under Linker -> General, add $(BOOST_HOME)\lib to Additional Library Directories
  5. Ensure C/C++ -> Code Generation -> Runtime Library is set to /MD or /MDd so the linker can find the boost lib files

Some boost components, such as boost log, need to be built before they can be linked:

  1. Unarchive the downloaded compressed file
  2. Open a command prompt in administrator mode and cd into the unarchived directory. Run bootstrap.bat to build b2
  3. Run b2 install --prefix=PREFIX --toolset=msvc-10.0 --build-type=complete stage. This will take about 30 minutes, so be patient. Note: PREFIX is the directory where you want to install boost (eg: lib); toolset=msvc-10.0 means compile boost using Visual Studio 2010.

Java EE Default Error Page

The default error page for a Java webapp can be set via web.xml:


<web-app>
  ...
  <error-page>
    <location>/error</location>
  </error-page>
</web-app>
Several error pages can be defined, each assigned to a specific error code:

<error-page>
  <error-code>404</error-code>
  <location>/notFoundError</location>
</error-page>
<error-page>
  <error-code>403</error-code>
  <location>/forbiddenError</location>
</error-page>

If using Spring MVC, the error message and status code can be obtained in the handler using the following request attributes:

@RequestMapping("/error")
public String error(HttpServletRequest req) {
  Object message = req.getAttribute("javax.servlet.error.message");
  Object statusCode = req.getAttribute("javax.servlet.error.status_code");
  Object requestURI = req.getAttribute("javax.servlet.error.request_uri");
  // ...
}

Running Non-duplicate Task On A Cluster Using Hazelcast

Having a container cluster is great, but one problem is when you need to run something only once (think table cleanup, pulling data into a shared db, etc).

Since you have multiple VMs running off the same source code, each VM will normally run a duplicate instance of the task. You can stop that from happening by using a distributed lock.

@Autowired private HazelcastInstance hz;

ILock lock = hz.getLock(MyTask.class.getName());

Every node in the cluster has access to the distributed lock, but only one node can hold it at a time. If a node successfully obtains the lock, its thread continues executing; otherwise the thread blocks until it manages to obtain the lock (or times out).

logger.info("Trying to run task on this node...");
lock.lock(); // thread execution will only pass this line if no other node has the lock

logger.info("Running task on this node..");
// do something here

Since calling lock.lock() might block the thread, you typically want to run it on a separate thread.

If a node that has the lock crashes / dies, the lock will be released, and other nodes can pick it up and become the ‘task owner’.
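Hazelcast's ILock implements the standard java.util.concurrent.locks.Lock interface, so the pattern above can be sketched with a plain ReentrantLock standing in for the distributed lock (on a real cluster you would obtain it via hz.getLock(...) instead). The class and method names below are hypothetical:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SingletonTask {
  // Stand-in for hz.getLock(MyTask.class.getName()) on a real HazelcastInstance
  public static final Lock LOCK = new ReentrantLock();

  // Runs the task only if this node wins the lock within the wait period;
  // returns true if the task actually ran here.
  public static boolean runIfOwner(Runnable task, long waitSeconds) throws InterruptedException {
    // tryLock with a timeout avoids blocking this thread forever
    if (!LOCK.tryLock(waitSeconds, TimeUnit.SECONDS)) {
      return false; // another node (here: thread) owns the task
    }
    try {
      task.run();
      return true;
    } finally {
      LOCK.unlock();
    }
  }
}
```

Because tryLock takes a timeout, this variant can run on the caller's thread without blocking indefinitely; with a plain lock.lock() you would dispatch to a separate thread as noted above.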

Obtaining Maven Version Programmatically At Runtime

This is my favorite way of reading the Maven project version at runtime.

First, in your pom, declare that all files in src/main/resources will be filtered, meaning any placeholder (eg: ${project.version}) will be substituted by Maven.


<build>
  ...
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
</build>

Then create a properties file (eg: myapp.properties) with a key-value pair containing the Maven project version:

myapp.version=${project.version}

Ensure the file above is configured in your Spring container, either using XML:

<context:property-placeholder location="classpath:/myapp.properties" />
Or Java annotation:


@Configuration
@PropertySource("classpath:/myapp.properties")
public class TheConfig {
  ...
}

The key-value entry can then be injected into any Java bean living in the container:


@Component
public class MyClass {
  @Value("${myapp.version}") private String version;
  ...
}

Or even in Spring XML config:

<bean class="MyClass">
  <property name="version" value="${myapp.version}" />
</bean>
Beware that if you have more than one Spring container (eg: root and servlet), you need to declare the property source in both, otherwise the injection will fail.
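If you also need the version outside a Spring container, the same Maven-filtered file can be read with plain java.util.Properties. This is a sketch assuming the myapp.properties file and key from the example above:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AppVersion {
  // Reads the myapp.version entry from a properties stream,
  // e.g. AppVersion.class.getResourceAsStream("/myapp.properties")
  public static String readVersion(InputStream in) throws IOException {
    Properties props = new Properties();
    props.load(in); // the placeholder was already substituted by Maven at build time
    return props.getProperty("myapp.version", "unknown");
  }
}
```

The default value guards against a properties file that was packaged without filtering.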

Configuring NGINX Load Balancer Reverse Proxy

Below is an example of an NGINX reverse proxy with 2 load-balanced backends:

upstream backend {
  ip_hash;
  server localhost:8080 fail_timeout=3;
  server localhost:8081 fail_timeout=3;
}

server {
  listen 80;
  server_name mydomain.com;

  location / {
    proxy_pass http://backend/;
    proxy_redirect default;
    proxy_cookie_domain localhost mydomain.com;
  }
}
  • The upstream directive defines a cluster named backend with 2 backend servers (localhost:8080 and localhost:8081). The fail_timeout parameter sets the window in which failed attempts mark a node as unavailable, and how long the node is then considered down (3 seconds here).
  • The ip_hash directive causes requests coming from the same IP to be routed to the same backend. This is often called sticky sessions; another popular strategy uses a cookie.
  • The cluster name backend is referenced by the proxy_pass directive inside the location block

Add the down parameter to stop requests being passed to a specific backend:

upstream backend {
  ip_hash;
  server localhost:8080 fail_timeout=3;
  server localhost:8081 fail_timeout=3 down;
}

This is handy when performing no-outage release.

Don’t forget to reload the configuration using nginx -s reload.

Creating Self Executing Tomcat Jar

The tomcat maven plugin comes with a handy exec-war-only goal that bundles a standalone tomcat server into an executable jar. Add the following configuration to your pom file:


<plugin>
  <groupId>org.apache.tomcat.maven</groupId>
  <artifactId>tomcat7-maven-plugin</artifactId>
  <version>2.0</version>
  <executions>
    <execution>
      <id>tomcat-run</id>
      <goals>
        <goal>exec-war-only</goal>
      </goals>
      <phase>package</phase>
      <configuration>
        <path>/</path>
      </configuration>
    </execution>
  </executions>
</plugin>

When you run the package goal, another artifact called myapp-x.y-war-exec.jar will be created. To run it, simply execute java -jar myapp-x.y-war-exec.jar in a terminal shell.

The jar also comes with several options, which you can view by passing the --help flag. The -httpPort option is often useful for setting up a few testing environments:

C:\myapp>java -jar myapp-1.0-war-exec.jar --help
usage: java -jar [path to your exec war jar]
 -ajpPort <ajpPort>                      ajp port to use
 -clientAuth                             enable client authentication for
                                         https
 -D <key=value>                          key=value
 -extractDirectory <extractDirectory>    path to extract war content,
                                         default value: .extract
 -h,--help                               help
 -httpPort <httpPort>                    http port to use
 -httpProtocol <httpProtocol>            http protocol to use: HTTP/1.1 or
                                         org.apache.coyote.http11.Http11Nio
                                         Protocol
 -httpsPort <httpsPort>                  https port to use
 -keyAlias <keyAlias>                    alias from keystore for ssl
 -loggerName <loggerName>                logger to use: slf4j to use slf4j
                                         bridge on top of jul
 -obfuscate <password>                   obfuscate the password and exit
 -resetExtract                           clean previous extract directory
 -serverXmlPath <serverXmlPath>          server.xml to use, optional
 -uriEncoding <uriEncoding>              connector uriEncoding default
                                         ISO-8859-1
 -X,--debug                              debug

Java Web Session Cluster Replication With Hazelcast

Source: http://www.hazelcast.com/use-cases/web-session-clustering/

This is a great way to ensure that session information is maintained when you are clustering web servers. You can also use a similar pattern for managing user identities.

Say you have more than one web server (A, B, C) with a load balancer in front of them. If server A goes down, your users on that server will be directed to one of the live servers (B or C), but their sessions will be lost! So we have to back all these sessions up somewhere if we don’t want to lose them upon server crashes. Hazelcast WM allows you to cluster user http sessions automatically. The following are required for enabling Hazelcast Session Clustering:

  • Target application or web server should support Java 1.5+
  • Target application or web server should support Servlet 2.4+ spec
  • Session objects that need to be clustered have to be Serializable
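The Serializable requirement can be checked up front by round-tripping a session attribute through Java serialization, which is essentially what session replication must do. UserSession here is a hypothetical session attribute class:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class UserSession implements Serializable {
  private static final long serialVersionUID = 1L;
  public final String username;

  public UserSession(String username) { this.username = username; }

  // Serializes then deserializes the object; throws if any field is not Serializable
  public static Object roundTrip(Serializable obj) throws IOException, ClassNotFoundException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
      out.writeObject(obj);
    }
    try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
      return in.readObject();
    }
  }
}
```

Storing it on the session is then just session.setAttribute("user", new UserSession("alice")).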

Here are the steps to setup Hazelcast Session Clustering:

Put the hazelcast and hazelcast-wm jars in your WEB-INF/lib directory.

Put the following xml into your web.xml file. Make sure the Hazelcast filter is placed before all the other filters, if any; for example, put it at the top.


<filter>
    <filter-name>hazelcast-filter</filter-name>
    <filter-class>com.hazelcast.web.WebFilter</filter-class>
    <!-- Name of the distributed map storing your web session objects -->
    <init-param>
        <param-name>map-name</param-name>
        <param-value>my-sessions</param-value>
    </init-param>
    <!-- Is the load balancer configured with sticky sessions? -->
    <init-param>
        <param-name>sticky-session</param-name>
        <param-value>true</param-value>
    </init-param>
    <!-- Enable debug logging -->
    <init-param>
        <param-name>debug</param-name>
        <param-value>true</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>hazelcast-filter</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>FORWARD</dispatcher>
    <dispatcher>INCLUDE</dispatcher>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>
<listener>
    <listener-class>com.hazelcast.web.SessionListener</listener-class>
</listener>

Package and deploy your war file as you would normally do.

It is that easy! All http requests will go through the Hazelcast WebFilter, which will put the session objects into a Hazelcast distributed map as needed.

Clustering Tomcat Using Static Membership

Source: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2009794

Purpose

This article gives an example of cluster configuration using static cluster membership (instead of determining it dynamically over multicast), and points out some important aspects of membership configuration. While multicast membership is simpler to set up, static membership is necessary on networks with multicast disabled.

Resolution

Example: This is an example Cluster element in server.xml from a node that is part of a 2-node static cluster. In this example, all nodes are on the same host. You can copy this example and modify the hosts and ports as necessary for your own cluster.

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8" channelStartOptions="3">

  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>

  <Channel className="org.apache.catalina.tribes.group.GroupChannel">

    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="localhost" port="4100" autoBind="0"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>

    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"
                 staticOnly="true"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>

    <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
      <!-- This entry corresponds to this node itself, so it is commented out:
      <Member className="org.apache.catalina.tribes.membership.StaticMember"
              host="localhost" port="4100"
              uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}"/>
      -->
      <Member className="org.apache.catalina.tribes.membership.StaticMember"
              host="localhost" port="4200"
              uniqueId="{1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}"/>
    </Interceptor>

  </Channel>

  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

This example differs from the default multicast configuration as discussed in the Apache Tomcat 6 Clustering how-to. These differences are important when creating your own non-multicast configuration:

  • The McastService element is removed.
  • The channelStartOptions="3" switch has been added to the Cluster element, to disable the multicast service. Even when not explicitly configured, the multicast service is enabled by default. If the multicast service is not disabled this way, and multicast is enabled on the network, your nodes could cluster with unexpected members. For more information about the Cluster element, see its entry in the Apache documentation.
  • The TcpPingInterceptor class is added. This interceptor pings other nodes so that all nodes can recognize when other nodes have left the cluster. Without this class, the cluster may appear to work fine, but session replication can break down when nodes are removed and re-introduced. For more information about the TcpPingInterceptor element, see its entry in the Apache API documentation.
  • The StaticMembershipInterceptor element is added at the end of the list of interceptors, specifying the other static members of the cluster. For more information about the StaticMembershipInterceptor element, see its entry in the Apache documentation.

Remember these points when modifying the configuration:

  • As with the default configuration, you can use either the DeltaManager or BackupManager by changing the Manager element.
  • The order of the interceptors is very important. Preserve the order presented here.

    Caution: Reversing the first two interceptors can result in a log full of messages similar to:

    WARNING: Unable to send ping from TCP ping thread.
    java.lang.NullPointerException
            at org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor.sendPing(TcpPingInterceptor.java:121)
            at org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor$PingThread.run(TcpPingInterceptor.java:166)
    
  • One member in the static list is commented out: the member that corresponds to the node on which this configuration file is located. Be sure not to include a node in its own cluster membership. If this were done, the node would sync to itself as if it were a different node in the cluster. With DeltaManager, that could lead to errors at startup like:
    org.apache.catalina.ha.session.DeltaManager waitForSendAllSessions
    SEVERE: Manager [localhost#/petcare]: No session state send at [time] received, timing out after 60,087 ms.
    org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
    WARNING: Manager [localhost#/petcare]: Drop message SESSION-GET-ALL inside GET_ALL_SESSIONS sync phase start date [times]
    

    With the BackupManager this configuration is silently accepted, but the node would consider itself its own backup. Restarting the node would result in the loss of sessions not synced to other nodes.

Additional Information

This configuration is known to work with Tomcat 6.0.33. It does not work with Tomcat 7.0.22/23. For more information, see the Apache bug report.

Transferring File Using FTPS in Java

Here’s sample code for setting up an FTPS file transfer in Java. Make sure you have set up an SSLContext trusting the FTPS server’s certificate.

SSLContext sslContext = /* setup SSLContext */
FTPSClient ftps = new FTPSClient(true, sslContext);
ftps.connect(hostname, port);

// Timeout exception will be raised if no response received after 20s
ftps.setDataTimeout(20000);

// Authenticate
ftps.user("ftp_user");
int ret = ftps.pass("ftp_pass");

// Define protection buffer size and protocol. Following are the defaults for implicit FTPS
ftps.parsePBSZ(0);
ftps.execPROT("P");

// Set passive mode and file transfer type
ftps.type(FTP.BINARY_FILE_TYPE);
ftps.enterLocalPassiveMode();

// Remote path where file will be downloaded from
ftps.changeWorkingDirectory("/remote/path");

// Retrieve a file called "file.txt" from the remote server, then clean up
FileOutputStream local = new FileOutputStream("file.txt");
ftps.retrieveFile("file.txt", local);
local.close();
ftps.logout();
ftps.disconnect();

This code uses the commons-net package, so make sure it is included in your maven dependencies:


<dependency>
  <groupId>commons-net</groupId>
  <artifactId>commons-net</artifactId>
  <version>3.3</version>
</dependency>
The actual FTPS code will vary greatly depending on your FTPS server setup. The above assumes an FTPS server running in passive mode with normal username / password authentication.