Categories
development

Building Apache OFBiz Docker Images

Apache OFBiz (https://ofbiz.apache.org/) is an open source suite of business applications that companies can use to manage customer relationships, order processing, warehouse management, HR and many other functions.

This post covers how to build OFBiz as a Docker image so it can be deployed as a Docker container for testing.

Pre-requisites

To retrieve, build and run OFBiz Docker images you will need the following installed on your system:

  • Docker
  • Git
  • Java
  • JAVA_HOME environment variable set.

If running on Windows I would suggest installing the following:

  • Git for Windows. This will give you the Git Bash terminal.
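As a quick sanity check (a sketch; adjust the tool names for your platform), you can confirm the required tools are on your PATH and that JAVA_HOME is set:

```shell
# Report any required tool missing from the PATH, and check JAVA_HOME is set.
for tool in docker git java; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
[ -n "$JAVA_HOME" ] || echo "JAVA_HOME is not set"
```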

Get the Sources

We have a few sets of sources to download:

  • The docker-ofbiz project
  • Ofbiz-framework
  • Any ofbiz plugins (optional)

At a shell prompt retrieve the docker-ofbiz sources using:

git clone https://github.com/danwatford/ofbiz-docker

Change to the new ofbiz-docker directory and then retrieve the ofbiz sources:

cd ofbiz-docker

git clone https://github.com/apache/ofbiz-framework --branch trunk

Optional Step

Next retrieve any plugin sources you want to include in the build. These must be downloaded to the ofbiz-framework/plugins directory using commands similar to:

mkdir ofbiz-framework/plugins

git clone https://github.com/danwatford/ofbizGridTest ofbiz-framework/plugins/gridTest

Build the Docker Image

Execute the Gradle build to create the Docker image.

At a Unix or Unix-like prompt (e.g. Git Bash) run:

./gradlew buildOfbizImage

At a Windows command prompt run:

gradlew buildOfbizImage

The Gradle build will generate a Dockerfile and then execute the Docker build. Once complete, we can see the new image listed by running:

docker images

On my host I get

$ docker images
REPOSITORY  TAG      IMAGE ID        CREATED         SIZE
ofbiz       latest   42eb0f47adae    48 minutes ago  1.14GB

Run the Docker Container

To create and run a new docker container from the built image execute:

docker-compose up

The above will run the ofbiz container, loading data from the local-config directory. This allows setting of environment-specific administrator passwords.

Once the container is running, visit https://localhost:8443/partymgr in your web browser and log in with username admin and password ofbiz.

Depending on your Docker setup, you may need to substitute your docker-machine’s IP address for localhost. You may also need to accept the browser’s warning that the connection is not secure.

Stop the Docker Container

At the terminal where docker-compose was launched press Ctrl-C to interrupt and stop the running container.

Alternatively you can open another shell, change to the docker-ofbiz directory and execute:

docker-compose down

Sources

The sources for docker-ofbiz can be found at https://github.com/danwatford/ofbiz-docker.

Categories
case-study

Case Study: Clinco

Clinco are specialists in the ordering and analysis of medical records, producing documents in relation to cases of catastrophic personal injury and medical negligence.

For each case they handle, Clinco need to produce a number of documents, each populated with various details relating to the client and/or the case subjects. This information was entered into the documents manually – a repetitive and time-consuming task with a not insignificant risk of data-entry error.

Clinco were keen to have case documents automatically generated, pre-populated with the standard case-related data, so they could focus on the content resulting from their analysis.

The Solution

After an initial high-level requirements discussion Watford Consulting prototyped two solutions for document generation – one cloud based, one on-premises.

The cloud solution, making use of Azure and Plumsail Documents, would have been very quick to set up, but was not compatible with Clinco’s current Information Handling Model for security reasons. Passing sensitive client information to cloud providers was not something Clinco was prepared to do.

The on-premises solution was explored further, extended with additional tooling to capture and store client and case records, and delivered as an application built on top of Microsoft Office.

The solution is now in use by multiple team members at Clinco.

At Clinco, we streamlined our workflow using a bespoke IT solution from Watford Consulting. They took the time to understand our business and then develop a technical system to remove inefficiencies and enhance the processes involved. We’re saving time, and reducing potential for error.

We have wanted something like this for ages but didn’t know where to start looking! The project was also much more affordable than we expected.

We thoroughly recommend Watford Consulting to any business looking to use technology to drive efficiency.
— Sarah Wallace, Clinco

Categories
development

Copying Context to Executor Service Threads

Find the sources on Github: https://github.com/danwatford/thread-context-copy

Thread-Specific Context

If building a non-reactive Java web service, such as a REST service that acts as an interface to other upstream services, it is common
to adopt a model of one thread per request with no state held at the web service. This model can be deployed to servlet containers
and application servers with ease.

For any onward requests made to upstream services it is often necessary to populate the request with information from the incoming
request, possibly to identify the user to the upstream service, or to store a request-id/correlation-id which can be used to trace
processing through multiple systems.

This information is not normally passed from method to method through the web service code but is instead held in temporary storage
scoped to the original request. Since the one-thread-per-request model is used, this temporary storage normally ends up being
ThreadLocal variables. When requests are constructed for the upstream service information can be read from these ThreadLocal
variables.

Using Multiple Threads

Depending on the work to be done it may be necessary to submit multiple requests to an upstream service. By sticking with a single
thread these requests will be sent sequentially which may result in unacceptable performance for the client. We can
use an ExecutorService to execute multiple upstream requests concurrently, but the problem is ensuring any thread specific data
is in place on the worker threads that will perform the requests.

By using a new ExecutorService to manage the concurrent upstream requests we can make use of a ContextCopyingThreadFactory to
handle copying application specific context from the original thread to any new threads.

Code listing: ContextCopyingThreadFactory.java

package com.foomoo.threadutil;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

/**
 * A {@link java.util.concurrent.ThreadFactory} that copies thread-specific ({@link java.lang.ThreadLocal}) data between threads. This is useful to
 * copy context across threads for use in a {@link java.util.concurrent.ThreadPoolExecutor}.
 */
public class ContextCopyingThreadFactory implements ThreadFactory {

    private final ThreadFactory threadFactory = Executors.defaultThreadFactory();

    private final List<ContextCopier> contextCopiers;

    /**
     * Construct the {@link ContextCopyingThreadFactory} with the given {@link java.util.Collection} of {@link ContextCopier}s. If the
     * {@link Collection} is ordered, the contexts will be copied in the same order when applied to a new {@link Thread}.
     *
     * @param contextCopiers The {@link ContextCopier}s to apply to new Threads.
     */
    public ContextCopyingThreadFactory(final Collection<ContextCopier> contextCopiers) {
        this.contextCopiers = new ArrayList<>(contextCopiers);
        this.contextCopiers.forEach(ContextCopier::copy);
    }

    @Override
    public Thread newThread(final Runnable r) {
        return threadFactory.newThread(makeRunnableContextCopying(r));
    }

    /**
     * Takes the given {@link Runnable} and wraps it to execute the registered context-copying operations before the {@link Runnable}'s own operations.
     *
     * @param r The {@link Runnable} to wrap.
     * @return The new {@link Runnable}.
     */
    private Runnable makeRunnableContextCopying(final Runnable r) {

        return () -> {
            contextCopiers.forEach(ContextCopier::apply);
            r.run();
        };
    }
}

The ContextCopyingThreadFactory depends on implementations of ContextCopier to perform the actual reading and writing of thread specific
data. The ContextCopyingThreadFactory ensures that ContextCopier#copy will be called on the constructor thread and that
ContextCopier#apply will be called on any new threads created by the ThreadFactory.

Implementations of ContextCopier must ensure any transformations are applied to the copied data as appropriate for the application. For
example, if the thread specific data is some sort of cache the implementation may choose to reuse the cache across all threads or create
copies depending on whether the cache is considered thread-safe.

Code listing: ContextCopier.java

public interface ContextCopier {

    /**
     * Captures context from the current thread.
     */
    void copy();

    /**
     * Applies the captured context to the current thread.
     */
    void apply();
}

The ExecutorServiceExample (available here: https://github.com/danwatford/thread-context-copy/tree/master/threadcontextcopyexamples/src/main/java/com/foomoo/threadutils/example)
demonstrates use of the ContextCopyingThreadFactory with an XRequestIdContextCopier which copies values from/to the Log4J2 ThreadContext.

The value from each thread’s ThreadContext is included in any logging output.

Code listing: ExecutorServiceExample.java

package com.foomoo.threadutils.example;

import com.foomoo.threadutil.ContextCopier;
import com.foomoo.threadutil.ContextCopyingThreadFactory;
import com.google.common.collect.ImmutableList;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

import java.util.Collections;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class ExecutorServiceExample {

    private static final String KEY = "requestId";

    /**
     * Example use case of the {@link ContextCopyingThreadFactory} where the RequestId for a server processed request is stored in thread-specific
     * storage and copied to other threads.
     * <p>
     * In this example the original request id is set and then the executor is created, making use of the {@link ContextCopyingThreadFactory} itself
     * configured with an instance of {@link XRequestIdContextCopier}. The {@link Callable}s submitted to the executor then log output to demonstrate
     * the RequestId values held in their thread specific storage.
     *
     * @throws InterruptedException Not expected to be thrown.
     */
    public static void main(final String[] args) throws InterruptedException {

        setRequestId("000");

        final Logger logger = LogManager.getLogger();
        logger.info("Example start");

        final ContextCopyingThreadFactory threadFactory = new ContextCopyingThreadFactory(ImmutableList.of(new XRequestIdContextCopier()));
        final ExecutorService executorService = Executors.newFixedThreadPool(5, threadFactory);

        IntStream.rangeClosed(1, 20)
                 .forEach(taskId -> executorService.submit(getRunnable(taskId, logger)));

        logger.info("All tasks submitted");

        executorService.shutdown();
        executorService.awaitTermination(1, TimeUnit.SECONDS);
    }

    private static Callable<Void> getRunnable(final int taskId, final Logger logger) {
        return () -> {
            final String padding = String.join("", Collections.nCopies(taskId, "  "));
            final String message = String.format("%s%02d", padding, taskId);
            logger.info(message);
            Thread.sleep(200);
            logger.info(message);
            return null;
        };
    }

    static void setRequestId(final String requestId) {
        ThreadContext.put(KEY, requestId);
    }

    static String getRequestId() {
        return ThreadContext.get(KEY);
    }

    /**
     * {@link com.foomoo.threadutil.ContextCopier} for interacting with the X Request Id thread-specific storage in order to copy X Request Id values to
     * new threads. Applies a suffix to the requestId for each thread created.
     */
    private static class XRequestIdContextCopier implements ContextCopier {

        private String requestId;
        private final AtomicInteger applyCount = new AtomicInteger();

        @Override
        public void copy() {
            requestId = getRequestId();
        }

        @Override
        public void apply() {
            setRequestId(String.format("%s-%02d", requestId, applyCount.getAndIncrement()));
        }
    }
}

Code listing: log4j2-test.properties

appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = RequestId=%X{requestId} [%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n

rootLogger.level = info
rootLogger.appenderRef.stdout.ref = STDOUT

Categories
development

ABC Parser and Domain Libraries

Find the sources on Github: https://github.com/danwatford/abc

Parsing ABC Notation

ABC Notation (https://en.wikipedia.org/wiki/ABC_notation) is a way to encode music notation in simple text. There are many sources of folk/traditional tunes available in ABC Notation.

As part of a project to find shared sequences of notes in traditional tunes (see http://abc.foomoo.com) I built a parser to transform the content of ABC tune files into objects that could be readily processed.

This parser only converts ABC content into case classes that represent the various ABC notation elements used in the input. Trying to go directly from the ABC notation to tune objects would have introduced a lot of rules/complexity into the parser which is better handled in a separate processing phase.

Using the Parser

The parser is made available in module com.foomoo.abc:abc-parser on the jcenter Maven repository (https://jcenter.bintray.com/).

To use the parser, wrap the content to be parsed in a Reader (e.g. CharSequenceReader) and call the method on class AbcNotationParser relevant to the type of object to be parsed.
For example, to parse the entire contents of an ABC notation file use something similar to:

  private def parseFileContent(fileContent: String): Try[AbcNotationFile] =
     AbcNotationParser.file(new CharSequenceReader(fileContent)) match {
       case AbcNotationParser.NoSuccess(msg, next) =>
         Failure(new IllegalArgumentException(msg + "\nNext is: " + next.pos))
       case AbcNotationParser.Success(ts, _) =>
         Success(ts)
     }

See the abc-app module in the sources on github for examples of how to call the parser.

Alongside the abc-parser module is the abc-domain module which provides the case classes for ABC notation elements and for ABC tune elements. This module also includes a processor for conversion of ABC notation objects into tune objects.

Implementation Notes

The parser is implemented using Scala Parser Combinators. Since Scala Parser Combinators are no longer part of the Scala standard library, the following dependency was added to the parser project’s build.sbt:
"org.scala-lang.modules" %% "scala-parser-combinators" % "1.0.4"

The parser extends the RegexParsers trait as it provides convenient ways to convert Strings and regular expressions into Parser objects which operate on input.

Since whitespace is an important component of ABC notation, it is important to prevent RegexParsers from skipping over it by overriding skipWhitespace to return false.

To enable parsing of notation elements in String literals where terminating newlines might be missing, an end-of-input parser was defined. This parser is tested in many of the same places where tests are made for line breaks in the input.
This was important to simplify the building of test strings in the test specs.

Categories
development

Game Of Life In Scala

Find the sources on GitHub: https://github.com/danwatford/gameoflife-scala.git

Game of Life in Scala is a simple GUI program making use of the scala.swing package. Scala.swing provides wrappers around the Java Swing components, removing a lot of the boilerplate associated with common tasks.

This program’s UI consists of a single button and OnOffGrid, a panel-based class which draws ‘cells’ according to whether they are On (Alive) or Off (Dead). See OnOffGrid.scala in the sources.

In its constructor, OnOffGrid sets its preferred size, declares an interest in mouse clicks, then defines a handler for MouseClicked events.

Mouse clicks are transformed into CellClicked events and published to any components that are listening to the OnOffGrid. This facility allows the listener to alter any model that might be driving the OnOffGrid.

When the OnOffGrid is painted, the paintComponent method determines the On/Off state of each cell by calling the function assigned to onOff. The On/Off state of each cell is recalculated each time painting is required which may not be the most efficient approach. An alternative may be to have the OnOffGrid cache the state of each cell and only query the onOff function when the client invalidates the cache.

The main frame of the program is set up in the GofLUI object.

The GofLUI object extends SimpleSwingApplication, which provides the main method and sets up a Frame according to the top method. top creates a MainFrame, setting a title and assigning the contents into a BorderLayout. Interest is registered in clicks on the Button and clicks on the OnOffGrid’s cells. Both of these events drive changes to the model and trigger the OnOffGrid to be redrawn.

Categories
development

Recording APRS data with Groovy

The post APRSParser – A Spring Example included a class, SocketAPRSDataSource, which when coupled with class DataSourceCapture would capture APRS data from an APRS-IS server to a file for playback in later development.

Listing 1 contains a quick Groovy script to do the same thing. It can probably be refined a bit, but it should demonstrate how powerful Groovy is for quick prototyping.

Listing 1

/* Open a socket to an APRS-IS server, log in as guest and filter to only receive
 * messages from call signs prefixed with G, M or 2.
 * Write all messages to a file with a time stamp.
 */

def writeLine(writer, line) {
    def date = new Date()
    def lineOutput = date.format("yyyy-MM-dd'T'HH:mm:ss,", TimeZone.getTimeZone("UTC")) + line
    println(lineOutput)
    writer.println(lineOutput)
    writer.flush()
}

new File("APRSData.txt").withPrintWriter { writer ->
    // Using a UK server. See http://www.aprs2.net/serverstats.php for other servers.
    s = new Socket("uk.aprs2.net", 14580);
    s.withStreams { input, output ->
        // Wait for the software version line from the server before logging in.
        writeLine(writer, input.newReader().readLine())

        // Identifying software as UI-View32. Would rather not do this but using other
        // strings seems to prevent successful login.
        output << "user Guest pass -1 vers UI-View32 V2.03 filter p/G/M/2\n"

        // Record all lines until program killed or socket closed.
        input.newReader().eachLine { 
            writeLine(writer, it)
        }
    }
}

Categories
development

Trap 0

Files used in this post available at https://github.com/danwatford/trap0-tests.git

In build scripts for complex systems we sometimes need to perform operations that cause side-effects on the hosting system.

Take build systems that generate disk images as an example. Under Linux you can use tools like qemu-nbd to take a file and present it as a Network Block Device (NBD). After partitioning the NBD, generating filesystems on the partitions, and then mounting those filesystems, we are left with a lot of baggage should the build then fail. This baggage may leak resources or prevent future builds from executing. Enter Trap Zero.

Listing 1 – simple_t0.lib – Simple Trap Zero Handler

in_t0() {
    echo Adding to trap 0: $@
    _T0="$@; $_T0"
    trap "set -x; $_T0" 0
}

launch_and_check() {
    "$@"
    result=$?
    echo "Result $result for: $@"
    [ $result -eq 0 ] || exit 1
}

Listing 1 shows a simple function (in_t0) to execute the given parameters as a single command in trap zero. The function can be called multiple times, and the commands will be executed in trap zero in the reverse order to that in which they were added.
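The reverse-order behaviour can be seen with a tiny self-contained variant of in_t0 (stripped of the diagnostic echo and set -x), run inside a command substitution so the trap fires when that subshell exits:

```shell
#!/bin/sh
# Commands added with in_t0 run in reverse order when trap 0 fires.
in_t0() {
    _T0="$@; $_T0"
    trap "$_T0" 0
}

out=$(
    in_t0 echo first-added
    in_t0 echo second-added
    echo body
)
echo "$out"
```

The subshell prints "body" during normal execution, then its trap 0 handler runs "echo second-added" before "echo first-added".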

Listing 2 – test_t0_1.sh

#!/bin/sh
. ./simple_t0.lib

echo Echo 1
in_t0 echo Echo 2
echo Echo 3

To demonstrate the trap 0 functionality place the files from Listings 1 and 2 in the same directory. Executing test_t0_1.sh should give output similar to:

Echo 1
Adding to trap 0: echo Echo 2
Echo 3
+ echo Echo 2
Echo 2

“Echo 3” is printed before “Echo 2”. This is because the echo Echo 2 statement is only executed as part of the Trap 0 handler upon exit from the script.

This was a very simplistic and virtually useless example, and the implementation of the trap 0 handler can be improved on significantly too. A scenario where the trap 0 handler will be useful is when working with chroot. Depending on what you need to do inside the chroot you may need to bind the /proc, /dev and /sys directories into the chroot directory. Listing 3 shows how this may be done, utilising the trap 0 handler to cleanly remove the binds at exit from the script. This will handle the case where an error causes the script to exit.

Listing 3 – test_t0_2.sh – Using trap 0 for chroot work

#!/bin/bash
. ./simple_t0.lib
DIR=/tmp/foomoo
mkdir -p $DIR

# Mount proc, sys and dev in the target directory ready for a chroot.
launch_and_check mkdir $DIR/proc $DIR/sys $DIR/dev
in_t0 rmdir $DIR/proc $DIR/sys $DIR/dev

launch_and_check sudo mount --bind /proc $DIR/proc
in_t0 sudo umount $DIR/proc

# Simulate an error condition if $DIR/error_marker exists.
set -e
[ ! -e $DIR/error_marker ] || false

launch_and_check sudo mount --bind /sys $DIR/sys
in_t0 sudo umount $DIR/sys

launch_and_check sudo mount --bind /dev $DIR/dev
in_t0 sudo umount $DIR/dev

echo chroot would go here.

In Listing 3, lines 7, 10, 17 and 20 prepare the environment ready for the chroot, and immediately following each of these lines we add a command to undo their behaviour. For example, line 7 creates three directories and line 8 adds a command to delete these directories to the trap 0 handler.

Listing 4 – Output from running test_t0_2.sh

Result 0 for: mkdir /tmp/foomoo/proc /tmp/foomoo/sys /tmp/foomoo/dev
Adding to trap 0: rmdir /tmp/foomoo/proc /tmp/foomoo/sys /tmp/foomoo/dev
Result 0 for: sudo mount --bind /proc /tmp/foomoo/proc
Adding to trap 0: sudo umount /tmp/foomoo/proc
Result 0 for: sudo mount --bind /sys /tmp/foomoo/sys
Adding to trap 0: sudo umount /tmp/foomoo/sys
Result 0 for: sudo mount --bind /dev /tmp/foomoo/dev
Adding to trap 0: sudo umount /tmp/foomoo/dev
chroot would go here.
+ sudo umount /tmp/foomoo/dev
+ sudo umount /tmp/foomoo/sys
+ sudo umount /tmp/foomoo/proc
+ rmdir /tmp/foomoo/proc /tmp/foomoo/sys /tmp/foomoo/dev

Listing 4 shows the output of running test_t0_2.sh. After the chroot step has executed we see three umount commands executed in reverse order compared to when they were added to trap 0, followed by the rmdir. The changes to the environment have been unwound, but this hasn’t demonstrated anything different to what could have been achieved by simply writing the umount statements after the chroot.

The power of this approach really comes from coping with errors during execution. Try running test_t0_2.sh after creating file /tmp/foomoo/error_marker. Listing 5 shows the expected output.

Listing 5 – Output from running test_t0_2.sh with /tmp/foomoo/error_marker present

Result 0 for: mkdir /tmp/foomoo/proc /tmp/foomoo/sys /tmp/foomoo/dev
Adding to trap 0: rmdir /tmp/foomoo/proc /tmp/foomoo/sys /tmp/foomoo/dev
Result 0 for: sudo mount --bind /proc /tmp/foomoo/proc
Adding to trap 0: sudo umount /tmp/foomoo/proc
+ sudo umount /tmp/foomoo/proc
+ rmdir /tmp/foomoo/proc /tmp/foomoo/sys /tmp/foomoo/dev

In this case line 15 of test_t0_2.sh exits the script causing the trap 0 handler to kick in, unwinding the changes made so far.

Limitations
The in_t0 implementation presented in Listing 1 can be improved on, and will be addressed in a future post. The current implementation uses a single shell variable to hold all commands to be executed in the trap handler. If the contents of this variable grow it could hit a size limit. Also, it would be a good idea to have a non-successful execution of any of the commands in the trap 0 handler cause a non-zero exit code to be returned by the script. This means any problems with the umounts in the example above could then be communicated out to the caller.
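As a sketch of the exit-code idea (one possible refinement, not necessarily the approach a future post will take), each trap-0 command can record its failure in a variable that the handler passes to exit:

```shell
#!/bin/sh
# Sketch: an in_t0 variant that propagates cleanup failures via the exit code.
# The trap handler runs every command, noting any failure in _T0_RC.
script='
_T0=":"
_T0_RC=0
in_t0() {
    _T0="{ $*; } || _T0_RC=1; $_T0"
    trap "$_T0; exit \$_T0_RC" 0
}
in_t0 echo cleaning
in_t0 false
'
out=$(sh -c "$script")
rc=$?
echo "$out"
echo "exit code: $rc"
```

Here the failing cleanup command (false) runs first, the remaining cleanup (echo cleaning) still executes, and the script exits non-zero so the caller sees the problem.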

Categories
development

Checking which signals trigger Trap Zero

Trap Zero is a very useful tool in shell scripting when there is a need to clean up the environment upon exit. However, I wanted to be sure which of a shell’s default signal handlers would cause trap zero to be fired. Listing 1 shows the script I used to test the signal handling behaviour.

Listing 1 – trap_zero_test.sh

# No hash-bang in this script. We intend it to be run as an argument to the shell to
# permit testing against multiple shells.
#
# trap_zero_test.sh
# Script to test which signals will cause trap zero to be executed for the shell
#
# For each signal under test:
# - launch a child process which has the task of writing a string to a file upon
# execution of trap zero. The child process shall send itself the signal and then
# sleep until timeout after approximately 3 seconds unless the signal handler causes an exit.
# - Observe whether trap zero is triggered by examining the content of the written file.

# Create the temporary file for child processes to write to. Clean up the file on exit.
TEMP_FILE=$(mktemp)
trap "rm $TEMP_FILE" 0

# Print a header
printf "Signal\tName\tTrap 0 Executed\n"
for signal in $(seq 1 15); do
printf "$signal\t$(kill -l $signal)\t"

# Clear the temporary file ready for writing to by the child process.
: > $TEMP_FILE

# Launch the child process using a script customised to the signal under test.
$SHELL -s <<-EOSCRIPT
# Configure the trap.
trap "echo Trap0 > $TEMP_FILE" 0

# Send the test signal.
kill -$signal \$\$

# Timeout the process in 3 seconds if not already killed.
sleep 3 &
wait

# Reset the trap if it hasn't already been fired.
trap 0
EOSCRIPT

# Observe whether trap 0 was fired in the child process.
result="No"
[ "$(cat $TEMP_FILE)" = "Trap0" ] && result="Yes"
echo $result
done

This script will launch a child process with the task of registering trap zero and sending itself the signal under test. If trap zero is fired it will write the string Trap0 to a temporary file. The parent process then reads the contents of this file to determine whether trap zero fired for the particular signal under test.

The script can be executed against different shells using command lines like the following (note the redirect of standard error):

  • sh trap_zero_test.sh 2>/dev/null
  • bash trap_zero_test.sh 2>/dev/null

Testing on Ubuntu 12.04 with sh, bash, dash and ksh the output in Listing 2 was received in each case. Testing on Cygwin with sh and bash the output in Listing 3 was received in both cases.

Listing 2 – Output of test_trap_zero.sh on Ubuntu 12.04 for sh, bash, dash and ksh

Signal Name Trap 0 Executed
1 HUP Yes
2 INT Yes
3 QUIT No
4 ILL Yes
5 TRAP Yes
6 ABRT Yes
7 BUS Yes
8 FPE Yes
9 KILL No
10 USR1 Yes
11 SEGV Yes
12 USR2 Yes
13 PIPE Yes
14 ALRM Yes
15 TERM Yes

Listing 3 – Output of test_trap_zero.sh on Cygwin for sh and bash

Signal Name Trap 0 Executed
1 HUP Yes
2 INT Yes
3 QUIT No
4 ILL Yes
5 TRAP Yes
6 ABRT No
7 EMT Yes
8 FPE Yes
9 KILL No
10 BUS Yes
11 SEGV Yes
12 SYS Yes
13 PIPE Yes
14 ALRM Yes
15 TERM Yes

From Listings 2 and 3 it seems we can rely on trap 0 being executed for all major signals except QUIT, ABRT and KILL.

The KILL signal will terminate a process immediately with no opportunity for clean-up so it is expected behaviour that trap zero is not triggered in this case.

The ABRT signal is raised by a program when it experiences an abnormal situation that it cannot recover from. Immediately terminating without triggering trap zero in this case seems appropriate as we cannot be sure that the execution of trap will not also be compromised. It is interesting that Linux and Cygwin treat the handling of ABRT differently. If anyone has any info on why this difference occurs please let me know.

The QUIT signal is a keyboard signal and is usually generated by pressing Ctrl-\ at the terminal. This signal will normally cause a process to immediately terminate and dump core. Once again it is probably appropriate not to execute trap zero in this case, since any commands in that trap could alter the memory state and affect the resulting core dump.

Categories
development

APRSParser – A Spring Example

This is a class diagram for APRSParser – a small demonstration project which uses the Spring framework for object creation, dependency injection, and some basic aspect programming to provide logging.

Note: I still like to sketch out my software designs on the whiteboard rather than on the computer – hence the photo of my whiteboard rather than a nice graphic from a UML tool.

The source code for this project can be downloaded from https://github.com/danwatford/aprsparser.

This project comes with a couple of classes with main() methods, FileSourcedDecoderApp and SocketSourcedDecoderApp, intended to configure the application’s objects to retrieve APRS data from a file and from an APRS-IS server respectively. The main() methods are actually very simple, deferring the decisions on wiring up the application’s objects to Spring via different configuration files.

Dependency Injection
Listing 1 shows how FileSourcedDecoderApp creates an ApplicationContext based on the fileSourcedAPRSDecoder.xml file and retrieves an APRSDataSource from that context. The main() method then simply calls the run() method on the data source to start the data sourcing process. For the file data source, each line of the configured file (configured via Spring properties in this case) will be read and passed to any dependent IAPRSDataSourceListener objects. (See the class diagram for the relationship between AbstractAPRSDataSource and IAPRSDataSourceListener.)

Listing 1 – FileSourcedDecoderApp.java

package com.foomoo.aprs.aprsparser;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.foomoo.aprs.aprsparser.datasource.AbstractAPRSDataSource;

public class FileSourcedDecoderApp {

  public static void main(String[] args)
  {
    ApplicationContext context = new ClassPathXmlApplicationContext(
        "fileSourcedAPRSDecoder.xml");

    AbstractAPRSDataSource dataSource = context.getBean("dataSource", AbstractAPRSDataSource.class);
    dataSource.run();
  }
}

Let’s examine the Spring configuration file to see how our objects are wired together in the Spring container. Listing 2 shows the contents of fileSourcedAPRSDecoder.xml.

Listing 2 – fileSourcedAPRSDecoder.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

  <import resource="decoderApp.xml" />

  <bean id="dataSource"
    class="com.foomoo.aprs.aprsparser.datasource.FileAPRSDataSource">
    <property name="file">
      <bean class="java.io.File">
        <constructor-arg value="APRSData.txt" />
      </bean>
    </property>
  </bean>
</beans>

The main function of fileSourcedAPRSDecoder.xml is to define a bean called dataSource and configure its dependencies: in this case, a File object referring to the file containing the APRS data to be read by the parser. The other item of interest in fileSourcedAPRSDecoder.xml is the <import> tag identifying another resource to be read and processed. The contents of this imported resource are shown in Listing 3.
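As a sketch of the class on the receiving end of this wiring, a file-backed data source might look like the following. This is an illustrative reconstruction, not the project's actual FileAPRSDataSource: the nested Listener interface and every name other than setFile() are assumptions.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a file-backed data source matching the dataSource
// bean definition: Spring satisfies the <property name="file"> element by
// calling setFile().
public class FileAPRSDataSourceSketch {

    // Stand-in for the IAPRSDataSourceListener interface from the article.
    public interface Listener {
        void receivedData(String line);
    }

    private File file;
    private final List<Listener> listeners = new ArrayList<>();

    // Setter-injection target for the <property name="file"> element.
    public void setFile(File file) {
        this.file = file;
    }

    public void addListener(Listener listener) {
        listeners.add(listener);
    }

    // Reads each line of the configured file and passes it to listeners.
    public void run() throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = reader.readLine()) != null) {
                for (Listener listener : listeners) {
                    listener.receivedData(line);
                }
            }
        }
    }
}
```

Because the dependency arrives through an ordinary setter, the same class can be wired by Spring or constructed by hand in a test.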

Listing 3 – decoderApp.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop"
  xmlns:context="http://www.springframework.org/schema/context"
  xmlns:util="http://www.springframework.org/schema/util"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
  http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
  http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd">

  <bean id="locationWriter" class="com.foomoo.aprs.aprsparser.demo.LocationWriter">
    <property name="APRSItemSource" ref="aprsItemSource" />
    <property name="outputStream">
      <util:constant static-field="java.lang.System.out" />
    </property>
  </bean>

  <bean id="callsignWriter" class="com.foomoo.aprs.aprsparser.demo.UniqueCallsignWriter">
    <property name="APRSItemSource" ref="aprsItemSource" />
    <property name="outputStream">
      <bean class="java.io.FileOutputStream">
        <constructor-arg value="CallsignFile.txt" />
      </bean>
    </property>
  </bean>

  <bean id="aprsItemSource" class="com.foomoo.aprs.aprsparser.item.APRSItemSource">
    <property name="APRSDataSource">
      <ref bean="dataSource" />
    </property>
    <property name="APRSDecoder">
      <bean class="com.foomoo.aprs.aprsparser.parser.BasicAPRSParser" />
    </property>
  </bean>

  <bean id="decoderLogger" class="com.foomoo.aprs.aprsparser.logging.DecoderLogger" />

  <aop:config>
    <aop:aspect id="decoderLoggerAspect" ref="decoderLogger">
      <aop:after-throwing method="logDecodeUnsupported"
        throwing="ex"
        pointcut="execution(* com.foomoo.aprs.aprsparser.parser.IAPRSParser.parse(String))" />
      <aop:after-throwing method="logDecodeUnknown"
        throwing="ex"
        pointcut="execution(* com.foomoo.aprs.aprsparser.parser.IAPRSParser.parse(String))" />
    </aop:aspect>
  </aop:config>
</beans>

The first two beans declared in Listing 3 refer to the LocationWriter and UniqueCallsignWriter classes shown in the demo package in the class diagram. Both classes depend on an OutputStream to which they write their results. In the configuration in Listing 3 the locationWriter bean is injected with System.out, and the callsignWriter bean is injected with a FileOutputStream to a file named CallsignFile.txt. These beans also depend on IAPRSItemSource instances, satisfied by referencing the aprsItemSource bean. The setters for the IAPRSItemSource on these beans register anonymous inner implementations of IAPRSItemSourceListener on the supplied IAPRSItemSource.
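That setter-registration pattern can be sketched as follows. This is a hypothetical reconstruction: only the names IAPRSItemSource, IAPRSItemSourceListener and IAPRSItem come from the article, and all the method signatures are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a writer's setter stores nothing itself but
// registers an anonymous inner listener on the injected item source.
public class WriterRegistrationSketch {

    public interface IAPRSItem {
        String callsign();
    }

    public interface IAPRSItemSourceListener {
        void itemReceived(IAPRSItem item);
    }

    public static class APRSItemSource {
        private final List<IAPRSItemSourceListener> listeners = new ArrayList<>();

        public void addListener(IAPRSItemSourceListener listener) {
            listeners.add(listener);
        }

        // Would be called once the parser produces an IAPRSItem.
        public void publish(IAPRSItem item) {
            for (IAPRSItemSourceListener listener : listeners) {
                listener.itemReceived(item);
            }
        }
    }

    public static class CallsignWriter {
        private final StringBuilder out = new StringBuilder();

        // Spring calls this setter when wiring the APRSItemSource property;
        // the anonymous inner class is registered as a side effect.
        public void setAPRSItemSource(APRSItemSource source) {
            source.addListener(new IAPRSItemSourceListener() {
                @Override
                public void itemReceived(IAPRSItem item) {
                    out.append(item.callsign()).append('\n');
                }
            });
        }

        public String written() {
            return out.toString();
        }
    }
}
```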

Bean aprsItemSource is the instantiation of the APRSItemSource class, responsible for receiving data from a data source, passing it to an APRS parser, and then passing any resulting IAPRSItem objects to registered listeners (i.e. the locationWriter and callsignWriter beans). The aprsItemSource bean therefore has two dependencies, satisfied by a reference to the dataSource bean declared in Listing 2 and by an inner bean instantiating BasicAPRSParser.

BasicAPRSParser is a very limited, very basic implementation of the IAPRSParser interface and supports only a small subset of the possible APRS messages. It is in no way a model of how to write a parser! A third-party APRS parser could be integrated into this project by creating an adaptor that implements the IAPRSParser interface and maps the parser's results to IAPRSItem objects.
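Such an adaptor might look like the sketch below, where ThirdPartyParser and its Packet result type stand in for an external library; apart from the IAPRSParser and IAPRSItem names, everything here is invented for illustration.

```java
// Hypothetical adaptor putting a third-party parser behind the project's
// IAPRSParser interface. The interface shapes are simplified guesses.
public class ParserAdapterSketch {

    public interface IAPRSItem {
        String callsign();
    }

    public interface IAPRSParser {
        IAPRSItem parse(String message);
    }

    // Stand-in for an external library's parser and result type.
    public static class ThirdPartyParser {
        public static class Packet {
            final String source;
            Packet(String source) { this.source = source; }
        }

        public Packet decode(String raw) {
            // A real library would do full APRS decoding here; this just
            // takes the source callsign before the '>' separator.
            return new Packet(raw.split(">")[0]);
        }
    }

    // The adaptor delegates to the library and maps Packet onto IAPRSItem.
    public static class ThirdPartyParserAdapter implements IAPRSParser {
        private final ThirdPartyParser delegate = new ThirdPartyParser();

        @Override
        public IAPRSItem parse(String message) {
            ThirdPartyParser.Packet packet = delegate.decode(message);
            return () -> packet.source;
        }
    }
}
```

Wired in place of BasicAPRSParser, the rest of the application would be unaffected, which is the point of programming against the interface.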

Implementing logging using Aspect-Oriented Programming
Lines 37-46 of Listing 3 show the definition of an aspect configuration which we will use to log the occurrence of the two exceptions that can be thrown by the IAPRSParser.parse() method: APRSUnsupportedFormatException and APRSUnknownFormatException. APRSUnsupportedFormatException is thrown when the parser is processing a string that it recognises as an APRS message but does not support. APRSUnknownFormatException is thrown when the parser does not even recognise the string as an APRS message.

I would like to log the occurrence of these two exceptions, but at different log levels. The unsupported-format exception can be logged at a lower level, since I know my implementation does not support all APRS messages. However, I'd like the unknown-format exception to be logged at a higher level, since it might help identify problems within the parser. Listing 4 shows the contents of the logging class, DecoderLogger.

Listing 4 – DecoderLogger.java

package com.foomoo.aprs.aprsparser.logging;

import org.aspectj.lang.JoinPoint;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.foomoo.aprs.aprsparser.parser.IAPRSParser.APRSUnknownFormatException;
import com.foomoo.aprs.aprsparser.parser.IAPRSParser.APRSUnsupportedFormatException;

public class DecoderLogger {

  public void logDecodeUnsupported(JoinPoint jp, APRSUnsupportedFormatException ex)
  {
    Logger logger = LoggerFactory.getLogger(jp.getTarget().getClass());
    logger.debug(null, ex);
  }

  public void logDecodeUnknown(JoinPoint jp, APRSUnknownFormatException ex)
  {
    Logger logger = LoggerFactory.getLogger(jp.getTarget().getClass());
    logger.info(null, ex);
  }
}

DecoderLogger has two methods, logDecodeUnsupported() and logDecodeUnknown(), which the AOP configuration specifies are to be executed when an exception is thrown by an implementation of the IAPRSParser.parse() method. Both methods take a JoinPoint parameter, which creates a coupling to AspectJ, but I think it is worthwhile since it gives us access to the target class, i.e. the class that threw the exception. Knowing the class that threw the exception means we can follow the common pattern of selecting a logger configured for that class without having to couple DecoderLogger to it.

I am using slf4j as my logging framework. Spring uses Apache Commons Logging (JCL) for its own logging and pulls it into the project as a Maven dependency; see the Spring reference documentation for instructions on how to route this through slf4j instead. I am using slf4j over log4j, so I have a log4j.properties file in the project to set up my loggers and appenders.
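For reference, the approach described in the Spring reference documentation is to exclude commons-logging from the Spring dependency and add the jcl-over-slf4j bridge plus an slf4j binding. This pom.xml excerpt is illustrative only; the version numbers are examples, not the ones used in this project.

```xml
<!-- Illustrative pom.xml excerpt: route Spring's JCL logging through
     slf4j by excluding commons-logging and adding the bridge. -->
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-context</artifactId>
  <version>3.0.5.RELEASE</version>
  <exclusions>
    <exclusion>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>jcl-over-slf4j</artifactId>
  <version>1.6.1</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>1.6.1</version>
</dependency>
```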

In DecoderLogger.java (Listing 4) the code to carry out the logging of the two exceptions is virtually identical except for the logging method called on the Logger. The debug() method is called for APRSUnsupportedFormatExceptions, and the info() method is called for APRSUnknownFormatExceptions.

In decoderApp.xml (Listing 3) two after-throwing advice items are specified, both with pointcuts that match execution of the IAPRSParser.parse() method. Notice that the IAPRSParser interface is used in defining the pointcut rather than an implementation of the interface, meaning this advice will be woven into any implementation of the interface. Each after-throwing advice item specifies the method to be called when the advice is triggered: logDecodeUnsupported() and logDecodeUnknown() respectively. The exception parameter types declared on these two methods are used by Spring to filter which exceptions each piece of after-throwing advice will handle.

Running the application

To run the application (in Eclipse), right-click on FileSourcedDecoderApp.java in the Package Explorer and select Run As->Java Application. FileAPRSDataSource will read from the APRSData.txt file in the working directory. UniqueCallsignWriter will write a list of unique callsigns to CallsignFile.txt. LocationWriter will write location information for successfully parsed APRS messages to the console. Some logging output will also be written to the console.

The log (seen on the console and in the files all-info.log and all-app-info.log) will contain a lot of entries for APRSUnknownFormatException, which makes things pretty difficult to read. Change the logging level for the parser in log4j.properties from

> log4j.category.com.foomoo.aprs.aprsparser.parser=INFO

to

> log4j.category.com.foomoo.aprs.aprsparser.parser=WARN

Run the application again and the console output becomes much easier to read, as in Listing 5.

Listing 5 – excerpt from console for FileSourcedDecoderApp

G4NGV-7 ! Long: 2.6109999999999998 Lat: 53.42183333333333 (G4NGV-7>APT311,RELAY,TRACE2-2,qAR,MB7UWC:!5325.31N/00236.66Wj079/047/A=000085)
MB7UW ! Long: 1.3575 Lat: 51.065 (MB7UW>BEACON,WIDE5-5,qAR,MB7UDI:!5103.90N/00121.45W#PHG3630 HantsRaynet Digi Winchester User WIDEn-n for traceable paths)
EI2DBP ! Long: 7.9111666666666665 Lat: 52.8335 (EI2DBP>APZ19,qAR,EI3RCW-2:!5250.01NS00754.67W#PHG5730/W3, SEARG APRS Digi      Devil's Bit   )
GB3CG / Long: 2.1743333333333332 Lat: 51.869 (GB3CG>APZS05,TCPIP*,qAC,T2IRELAND:/251501z5152.14N/00210.46WmRV:58 145.725MHz CTCSS:118.8Hz /A=000513)
EI2GN-2 ! Long: 8.245333333333333 Lat: 51.93933333333333 (EI2GN-2>APOT2A,EI2FHP,WIDE2-1,qAR,EI3RCW-2:!5156.36NS00814.72W# 13.7V)
GB7SF-B ! Long: 1.4435 Lat: 53.41983333333334 (GB7SF-B>APJI23,TCPIP*,qAC,GB7SF-BS:!5325.19ND00126.61W&RNG0020 440 Voice 439.7375 -9.00 MHz)
GB7SF-C ! Long: 1.4435 Lat: 53.41983333333334 (GB7SF-C>APJI23,TCPIP*,qAC,GB7SF-CS:!5325.19ND00126.61W&RNG0020 2m Voice 145.7375 -0.600 MHz)

The console is now much easier to read. However, we should put some effort in future into fixing our parser to deal with those APRS messages that currently cause APRSUnknownFormatExceptions to be thrown.

Two other applications are provided in this project, SocketSourcedDecoderApp and SocketCaptureApp.

SocketSourcedDecoderApp uses an ApplicationContext based on the socketSourcedAPRSDecoder.xml file shown in Listing 6. The dataSource bean is an instantiation of the SocketAPRSDataSource class, which is written to receive APRS data from an APRS-IS server. The bean is dependency-injected with a java.net.Socket to communicate over, a user name and a password. If you find you are unable to communicate with the server specified in the listing, try a different server from the list at http://www.aprs-is.net/APRSServers.aspx. Many volunteers contribute personal servers and bandwidth to the APRS-IS project, so please be considerate of their resources when trying out this application.

Listing 6 – socketSourcedAPRSDecoder.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
  xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

  <import resource="decoderApp.xml" />

  <bean id="dataSource"
    class="com.foomoo.aprs.aprsparser.datasource.SocketAPRSDataSource">
    <property name="socket">
      <bean class="java.net.Socket">
        <constructor-arg value="ahubswe.net" />
        <constructor-arg value="14578" />
      </bean>
    </property>
    <property name="user" value="GUEST" />
    <property name="password" value="-1" />
  </bean>
</beans>
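The exchange SocketAPRSDataSource performs over that socket can be sketched as follows. The "user CALL pass PASSCODE" login line is the standard APRS-IS convention (GUEST with passcode -1 gives receive-only access), and '#'-prefixed lines are server comments; the class and method names here are invented for illustration.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an APRS-IS client session: send a login line,
// then read APRS packets line by line, skipping server comments.
public class AprsIsSessionSketch {

    // Builds the login line sent immediately after connecting.
    public static String loginLine(String user, String password) {
        return "user " + user + " pass " + password;
    }

    // Reads packets from the server stream, skipping '#' comment lines
    // such as the server banner and keep-alive messages.
    public static List<String> readPackets(BufferedReader in) throws IOException {
        List<String> packets = new ArrayList<>();
        String line;
        while ((line = in.readLine()) != null) {
            if (!line.startsWith("#")) {
                packets.add(line);
            }
        }
        return packets;
    }
}
```

In the real application each non-comment line would be handed to the registered IAPRSDataSourceListener objects rather than collected in a list.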

The SocketCaptureApp makes use of the socketCapture.xml Spring configuration file. Like the SocketSourcedDecoderApp application above, it uses the SocketAPRSDataSource class to connect to an APRS-IS server and retrieve APRS data. However, rather than having APRSItemSource listen to the data source, it uses another IAPRSDataSourceListener, DataSourceCapture, to write the APRS data to a file, DataSourceCapture.txt. You can use the captured data as the input file for the FileSourcedDecoderApp described earlier in this article.


Multiple Advice at the Same Pointcut

Continuing from the program presented in the previous post (another-aspect-to-spring.html), we'll take the source code (from here if needed) and weave in an additional advice to count the words being said by ChatterBox. First we'll add code to ChatterBoxCounterAdvice (see Listing 1) to examine the strings to be 'said' by ChatterBox and keep a running sum of the words in those strings. A new field, wordCount (line 9), and a new method, countWords() (line 14), have been added, and the report() method (line 12) has been amended to report the number of words counted in addition to the number of characters. Second, we'll associate our new advice, the countWords() method, with the required pointcut in ChatterBox. To do this we add a new <aop:before> tag to the aspect already in helloSpring.xml (see Listing 2, lines 29-30). Running HelloApp, besides the Spring logging output the console should show:

ten ten te
twenty twenty twenty
thirty thirty thirty thirty th
Characters Counted: 60 Words Counted: 11

A total of 11 words were counted in the strings passed to ChatterBox.
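The totals can be checked by hand: countWords() trims each string and splits it on runs of spaces, giving 3, 3 and 5 words for the three strings above. A minimal sketch reusing that expression:

```java
// Reuses the counting expression from ChatterBoxCounterAdvice to verify
// the totals reported on the console above.
public class CountCheck {

    static int countWords(String s) {
        return s.trim().split(" +").length;
    }

    public static void main(String[] args) {
        String[] said = {
            "ten ten te",
            "twenty twenty twenty",
            "thirty thirty thirty thirty th"
        };
        int words = 0;
        int characters = 0;
        for (String s : said) {
            words += countWords(s);
            characters += s.length();
        }
        // Prints: Characters Counted: 60 Words Counted: 11
        System.out.println("Characters Counted: " + characters
            + " Words Counted: " + words);
    }
}
```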

Download the source code for this post.

Listing 1 – ChatterBoxCounterAdvice.java

package com.foomoo.example.spring.helloSpring.advice;

import java.io.OutputStream;
import java.io.PrintStream;

public class ChatterBoxCounterAdvice {
 private OutputStream outputStream;
 private int characterCount;
 private int wordCount;

 public void setOutputStream(OutputStream outputStream) {this.outputStream = outputStream;}
 public void report() {new PrintStream(outputStream).println("Characters Counted: " + characterCount + " Words Counted: " + wordCount);}
 public void countCharacters(String aStringToCount){characterCount += aStringToCount.length();}
 public void countWords(String aStringToCount){wordCount += aStringToCount.trim().split(" +").length;}
}

Listing 2 – helloSpring.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:context="http://www.springframework.org/schema/context"
 xmlns:util="http://www.springframework.org/schema/util"
 xmlns:aop="http://www.springframework.org/schema/aop"
 xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd
  http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
  http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd
  http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">

 <bean id="chatterBox"
  class="com.foomoo.example.spring.helloSpring.ChatterBox">
  <property name="outputStream">
   <util:constant static-field="java.lang.System.out" />
  </property>
 </bean>
 <bean id="counter"
  class="com.foomoo.example.spring.helloSpring.advice.ChatterBoxCounterAdvice">
  <property name="outputStream">
   <util:constant static-field="java.lang.System.out" />
  </property>
 </bean>
 <aop:config>
  <aop:aspect id="chatterBoxCounterAspect" ref="counter">
   <aop:before method="countCharacters"
    pointcut="execution(* com.foomoo.example.spring.helloSpring.ChatterBox.saySomething(String)) and args(aStringToCount)" />
   <aop:before method="countWords"
    pointcut="execution(* com.foomoo.example.spring.helloSpring.ChatterBox.saySomething(String)) and args(aStringToCount)" />
  </aop:aspect>
 </aop:config>
</beans>