This section describes a number of common use cases in which Smooks can be used.
Templating

Smooks provides two main Templating options:

- FreeMarker Templating
- XSL Templating
What Smooks adds here is the ability to use these Templating technologies within the context of a Smooks filtering process. This means that these technologies:
- Can be applied to a source message on a per-fragment basis, as opposed to the whole message, i.e. "Fragment-Based Transforms". This is useful in situations where, for example, you only wish to insert a piece of data into a message at a specific point (e.g. add headers to a SOAP message), but you don't wish to interfere with the rest of the message stream. In this case you can "target" (apply) the template at the fragment of interest.
- Can take advantage of other Smooks technologies (Cartridges) such as the Javabean Cartridge. In this scenario, you can use the Javabean Cartridge to decode and bind data from the message into the Smooks bean context and then use (reference) that decoded data from inside your FreeMarker template (Smooks makes this data available to FreeMarker).
- Can be used to process huge message streams (GBs), while at the same time maintaining a relatively simple processing model with a low memory footprint. See Mixing DOM and SAX Models with Smooks.
- Can be used for generating "Split Message Fragments" that can then be routed (using Smooks Routing components) to physical endpoints (File, JMS), or logical endpoints on an ESB (a "Service").
Smooks can also be extended (and will be) to add support for other templating technologies.
FreeMarker is a very powerful Templating Engine. Smooks allows FreeMarker to be used as a means of generating text based content that can then be inserted into a message stream (aka a "Fragment Transform"), or used as a "Split Message Fragment" for routing to another process (see above).
Configuring FreeMarker templates in Smooks is done through the http://www.milyn.org/xsd/smooks/freemarker-1.1.xsd configuration namespace. Just configure this XSD into your IDE and you're in business!
Example - Inline Template:
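A minimal sketch of an inline template configuration (the target element and template body are illustrative; the template is wrapped in an XML comment to protect its markup):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:ftl="http://www.milyn.org/xsd/smooks/freemarker-1.1.xsd">

    <!-- Apply an inline FreeMarker template to the "order" fragment -->
    <ftl:freemarker applyOnElement="order">
        <ftl:template><!--<orderId>${order.id}</orderId>--></ftl:template>
    </ftl:freemarker>

</smooks-resource-list>
```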
Example - External Template Reference:
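A sketch of the equivalent configuration referencing an external template by URI (the path is illustrative):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:ftl="http://www.milyn.org/xsd/smooks/freemarker-1.1.xsd">

    <ftl:freemarker applyOnElement="order">
        <!-- Reference an external template resource -->
        <ftl:template>/templates/order-transform.ftl</ftl:template>
    </ftl:freemarker>

</smooks-resource-list>
```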
Smooks allows you to perform a number of operations with the Templating result. This is controlled by the <use> element, which is added to the <ftl:freemarker> configuration.
Example - Inlining the Templating Result:
Inlining allows you to inline the templating result into the Smooks.filter Result object. A number of directives are supported:
- addto: Add the templating result to the targeted element.
- replace (default): Use the templating result to replace the targeted element. This is the default behavior for the <ftl:freemarker> configuration when the <use> element is not configured.
- insertbefore: Add the templating result before the targeted element.
- insertafter: Add the templating result after the targeted element.
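A sketch of how a directive is configured through the <use> element (the target element and template path are illustrative):

```xml
<ftl:freemarker applyOnElement="order">
    <ftl:template>/templates/order-transform.ftl</ftl:template>
    <ftl:use>
        <!-- Insert the templating result before the targeted element -->
        <ftl:inline directive="insertbefore"/>
    </ftl:use>
</ftl:freemarker>
```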
Using <ftl:bindTo>, you can bind the Templating result to the Smooks bean context. The templating result can then be accessed by other Smooks components, such as the routing components. This can be especially useful for splitting huge messages into smaller (more consumable) messages that can then be routed to another process for handling.
Example - Binding the Templating Result to the Smooks bean context:
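A sketch of a bindTo configuration (bean id and template path are illustrative):

```xml
<ftl:freemarker applyOnElement="order-item">
    <ftl:template>/templates/order-item.ftl</ftl:template>
    <ftl:use>
        <!-- Bind the templating result into the bean context under "orderItem_xml" -->
        <ftl:bindTo id="orderItem_xml"/>
    </ftl:use>
</ftl:freemarker>
```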
(See full example in the split-transform-route-jms tutorial)
Using <ftl:outputTo>, you can direct Smooks to write the templating result directly to an OutputStreamResource. This is another useful mechanism for splitting huge messages into smaller (more consumable) messages that can then be processed individually.
Example - Writing the Template Result to an OutputStreamResource:
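A sketch of an outputTo configuration (the resource name and template path are illustrative; the referenced OutputStreamResource must be configured separately, e.g. via a <file:outputStream>):

```xml
<ftl:freemarker applyOnElement="order-item">
    <ftl:template>/templates/order-item.ftl</ftl:template>
    <ftl:use>
        <!-- Write the templating result to the named OutputStreamResource -->
        <ftl:outputTo outputStreamResource="orderItemSplitStream"/>
    </ftl:use>
</ftl:freemarker>
```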
(See full example in the split-transform-route-file tutorial)
FreeMarker Transforms using NodeModels
The easiest way to construct message transforms in FreeMarker is to use FreeMarker's NodeModel facility. This is where FreeMarker uses a W3C DOM as the Templating model, referencing the DOM nodes directly from inside the FreeMarker template.
Smooks adds three additional capabilities here:
- The ability to perform this on a fragment basis i.e. you don't have to use the full message as the DOM model, just the targeted fragment.
- The ability to use NodeModel in a streaming filter process i.e. Mixing DOM and SAX Models with Smooks.
- The ability to use it on non XML messages (CSV, EDI etc).
To use this facility in Smooks, you need to define an additional resource that defines/declares the NodeModels to be captured (created, in the case of SAX streaming):
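A sketch of that resource configuration, using the DomModelCreator Visitor (the selector values are illustrative):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd">

    <!-- Capture NodeModels for the "order" and "order-item" fragments -->
    <resource-config selector="order,order-item">
        <resource>org.milyn.delivery.DomModelCreator</resource>
    </resource-config>

</smooks-resource-list>
```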
(See full example in the xml-to-xml tutorial)
FreeMarker and the Javabean Cartridge
FreeMarker NodeModel is very powerful and easy to use. The tradeoff is obviously performance: constructing W3C DOMs is not cheap. It may also be the case that the required data has already been extracted and populated into a Java Object model anyway, e.g. where the data also needs to be routed to a JMS endpoint as Java Objects.
In situations where using the NodeModel is not practical, Smooks allows you to use the Javabean Cartridge to populate a proper Java Object Model (or a Virtual Model). This model can then be used in the FreeMarker Templating process. See the docs on the Javabean Cartridge for more details.
Example (using a Virtual Model):
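A sketch of this combination, populating a Virtual Model (a Map) and referencing it from the template (all element, property and data names are illustrative):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:jb="http://www.milyn.org/xsd/smooks/javabean-1.1.xsd"
                      xmlns:ftl="http://www.milyn.org/xsd/smooks/freemarker-1.1.xsd">

    <!-- Populate a virtual model (a Map) from each order-item fragment -->
    <jb:bindings beanId="orderItem" beanClass="java.util.HashMap" createOnElement="order-item">
        <jb:value property="productId" data="order-item/product"/>
        <jb:value property="quantity" data="order-item/quantity"/>
    </jb:bindings>

    <!-- Reference the virtual model from the FreeMarker template -->
    <ftl:freemarker applyOnElement="order-item">
        <ftl:template><!--<item id="${orderItem.productId}" qty="${orderItem.quantity}"/>--></ftl:template>
    </ftl:freemarker>

</smooks-resource-list>
```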
(See full example in the split-transform-route-file tutorial)
Configuring XSL templates in Smooks is almost identical to that of configuring FreeMarker templates (See above). It is done through the http://www.milyn.org/xsd/smooks/xsl-1.1.xsd configuration namespace. Just configure this XSD into your IDE and you're in business!
As with FreeMarker, external templates can be configured via URI reference in the <xsl:template> element.
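A sketch of such a configuration (element names per the xsl-1.1.xsd namespace; the target element and stylesheet path are illustrative):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:xsl="http://www.milyn.org/xsd/smooks/xsl-1.1.xsd">

    <xsl:xsl applyOnElement="order">
        <!-- External stylesheet referenced by URI -->
        <xsl:template>/templates/order-transform.xsl</xsl:template>
    </xsl:xsl>

</smooks-resource-list>
```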
As already stated, configuring XSL templates in Smooks is almost identical to configuring FreeMarker templates (see above). For this reason, please consult the FreeMarker configuration docs. Translating to XSL equivalents is simply a matter of changing the configuration namespace. Please read the following sections, however.
Points to Note Regarding XSL Support
- XSL Templating is only supported through the DOM Filter. It is not supported through the SAX Filter. This can (depending on the XSL being applied) result in lower performance when compared to SAX based application of XSL.
- Smooks applies XSLs on a message fragment basis (i.e. to DOM Element Nodes), as opposed to the whole document (i.e. the DOM Document Node). This can be very useful for fragmenting/modularizing your XSLs, but don't assume that an XSL written and working standalone (externally to Smooks and on the whole document) will automatically work through Smooks without modification. For this reason, Smooks handles XSLs targeted at the document root node differently, in that it applies the XSL to the DOM Document Node (rather than the root DOM Element). The basic point is that if you already have XSLs and are porting them to Smooks, you may need to make some tweaks to the Stylesheet.
- XSLs typically contain a template matched to the root element. Because Smooks applies XSLs on a fragment basis, matching against the "root element" is no longer valid. You need to make sure the Stylesheet contains a template that matches against the context node (i.e. the targeted fragment).
My XSLT Works Outside Smooks, but not Inside?
This can happen, and is most likely a result of one of the following:
- The Fragment based Processing Model: Your Stylesheet contains a template that's using an absolute path reference to the document root node. This will cause issues in the Smooks Fragment based Processing Model because the element being targeted by Smooks is not the document root node. Your XSLT needs to contain a template that matches against the context node being targeted by Smooks. See the following example.
- SAX Vs DOM Processing: You are not comparing like with like. Smooks currently only supports DOM-based processing for XSL. In order to make an accurate comparison, you need to use a DOMSource (namespace aware) when executing the XSLT outside Smooks. Note that a given XSL Processor does not always produce the same output when applying a given XSLT through SAX versus DOM.
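For example, a stylesheet targeted by Smooks at an <order-item> fragment should contain a template matching that context node, rather than an absolute path from the document root (element names are illustrative):

```xml
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

    <!-- Match the context node targeted by Smooks,
         not an absolute path like /order/order-items/order-item -->
    <xsl:template match="order-item">
        <item>
            <xsl:value-of select="product"/>
        </item>
    </xsl:template>

</xsl:stylesheet>
```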
Mixed DOM and SAX with Groovy
This means that you can use Groovy's DOM utilities to process the targeted message fragment. The "element" received by the Groovy script will be a DOM Element. This makes Groovy scripting via the SAX filter a lot easier, while maintaining the ability to process huge messages in a streamed fashion.
Mixed DOM and SAX Example
Take an XML message such as:
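For illustration, a shopping-list message along these lines (element names inferred from the discussion below):

```xml
<shopping>
    <category type="groceries">
        <item quantity="1">Chocolate</item>
        <item quantity="3">Coffee</item>
    </category>
    <category type="supplies">
        <item quantity="4">Pens</item>
        <item quantity="1">Notebook</item>
    </category>
</shopping>
```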
Using Groovy, we want to modify the "supplies" category in the shopping list, adding 2 to the quantity, where the item is "Pens". To do this, we write a simple little Groovy script and target it at the <category> elements in the message. The script simply iterates over the <item> elements in the category and increments the quantity by 2, where the category type is "supplies" and the item is "Pens":
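A sketch of what this might look like, assuming the Groovy Cartridge's groovy-1.1 configuration namespace (script details are illustrative; "element" is the DOM Element made available to the script):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:g="http://www.milyn.org/xsd/smooks/groovy-1.1.xsd">

    <g:groovy executeOnElement="category">
        <g:script>
            // "element" is the targeted category DOM Element
            if (element.getAttribute('type') == 'supplies') {
                use(groovy.xml.dom.DOMCategory) {
                    element.item.each { item ->
                        if (item.text() == 'Pens') {
                            item['@quantity'] = item.'@quantity'.toInteger() + 2
                        }
                    }
                }
            }
        </g:script>
    </g:groovy>

</smooks-resource-list>
```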
Processing Non-XML Data (CSV, EDI, JSON, Java etc.)

Smooks relies on a "Stream Reader" for generating a stream of SAX events from the Source message data stream. A Stream Reader is a class that implements the XMLReader interface (or the SmooksXMLReader interface).
By default, Smooks uses the default XMLReader (XMLReaderFactory.createXMLReader()), but can be easily configured to read non-XML data Sources by configuring a specialized XMLReader:
The reader can also be configured with a set of handlers, features and parameters. Here is a full example configuration.
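A sketch of a full reader configuration (the class names, feature URIs and parameter values are illustrative):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd">

    <reader class="com.acme.MyXMLReader">
        <handlers>
            <handler class="com.acme.MyContentHandler"/>
        </handlers>
        <features>
            <setOn feature="http://xml.org/sax/features/namespaces"/>
            <setOff feature="http://xml.org/sax/features/validation"/>
        </features>
        <params>
            <param name="aParam">aValue</param>
        </params>
    </reader>

</smooks-resource-list>
```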
A number of non-XML Readers are available with Smooks out of the box:
Any of the above XMLReaders can be configured as outlined above, but some of them have specialized configuration namespaces that simplify configuration.
Example - CSVReader Configuration
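A sketch using the csv-1.1 configuration namespace (field names, separator and quote character are illustrative):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:csv="http://www.milyn.org/xsd/smooks/csv-1.1.xsd">

    <!-- Map each CSV record to a record element with one child element per field -->
    <csv:reader fields="firstname,lastname,gender,age,country" separator="," quote="&quot;" skipLines="1"/>

</smooks-resource-list>
```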
Example - SmooksEDIReader Configuration
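A sketch using the edi-1.1 configuration namespace (the mapping model path is illustrative; the mapping model itself defines how the EDI message maps to the SAX event stream):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:edi="http://www.milyn.org/xsd/smooks/edi-1.1.xsd">

    <edi:reader mappingModel="/example/edi-to-xml-order-mapping.xml"/>

</smooks-resource-list>
```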
Example - JSONReader Configurations
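A sketch using the json-1.1 configuration namespace (attribute values are illustrative):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:json="http://www.milyn.org/xsd/smooks/json-1.1.xsd">

    <!-- Generate an event stream rooted at "order"; array entries become "element" events -->
    <json:reader rootName="order" arrayElementName="element"/>

</smooks-resource-list>
```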
To set features on the default reader, simply omit the class name from the configuration:
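A sketch of a <reader> configuration with no class attribute, so the features are set on the default XMLReader (the feature URIs are illustrative):

```xml
<reader>
    <features>
        <setOn feature="http://xml.org/sax/features/namespaces"/>
        <setOff feature="http://xml.org/sax/features/validation"/>
    </features>
</reader>
```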
Java Binding

The Smooks JavaBean Cartridge allows you to create and populate Java objects from your message data (i.e. to bind message data into Java objects). You need to add the milyn-smooks-javabean-1.1.jar to your classpath. If you are using Maven, add the org.milyn:milyn-smooks-javabean:1.1 dependency to the POM.
Note: Smooks supports a range of source data formats (XML, EDI, CSV, Java etc.), but for the purposes of this topic, we will always refer to the message data in terms of an XML format.
In the examples we will be referring a lot to the following XML message data:
In some examples we use different XML message data. Where this happens, the data is explicitly defined there.
The JavaBean Cartridge is used via the http://www.milyn.org/xsd/smooks/javabean-1.1.xsd configuration namespace. Install the schema in your IDE and avail of autocompletion.
An example configuration:
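A sketch of what such a configuration looks like (per the javabean-1.1.xsd namespace; the bindings themselves are added later):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:jb="http://www.milyn.org/xsd/smooks/javabean-1.1.xsd">

    <!-- Create an example.model.Order instance (beanId "order") at the start of the message -->
    <jb:bindings beanId="order" beanClass="example.model.Order" createOnElement="$document">
        <!-- value and wiring bindings go here -->
    </jb:bindings>

</smooks-resource-list>
```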
This configuration simply creates an instance of the example.model.Order class and binds it into the bean context under the beanId "order". The instance is created at the very start of the message on the $document element (i.e. the start of the root <order> element).
The createOnElement attribute controls when the bean instance is created. Population of the bean properties is controlled through the binding configurations (child elements of the <jb:bindings> element).
The namespace of the createOnElement can be specified via the createOnElementNS attribute.
The bean context (also known as the "bean map") is a very important part of the JavaBean Cartridge. One bean context is created per execution context (i.e. per Smooks.filter operation). Every bean created by the cartridge is put into this context under its beanId. If you want the contents of the bean context to be returned at the end of the Smooks.filter process, supply a org.milyn.delivery.java.JavaResult object in the call to the Smooks.filter method. The following example illustrates this principle:
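A sketch of this, assuming the Smooks 1.1-era filter(Source, Result, ExecutionContext) signature (the Order class and the "order" beanId are illustrative):

```java
import javax.xml.transform.stream.StreamSource;

import org.milyn.Smooks;
import org.milyn.container.ExecutionContext;
import org.milyn.delivery.java.JavaResult;

public class OrderReader {

    public Order readOrder(java.io.InputStream orderMessageStream) throws Exception {
        Smooks smooks = new Smooks("smooks-config.xml");
        ExecutionContext executionContext = smooks.createExecutionContext();

        // Supply a JavaResult to capture the bean context contents
        JavaResult result = new JavaResult();
        smooks.filter(new StreamSource(orderMessageStream), result, executionContext);

        // Beans are available in the result under their beanIds
        return (Order) result.getBean("order");
    }
}
```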
If you need to access the bean context beans at runtime (e.g. from a custom Visitor implementation), you do so via the BeanRepository class.
The Javabean Cartridge has the following requirements for javabeans:
- A public no-argument constructor.
- Public property setter methods. They don't need to follow any specific naming format, but it is better if they follow the standard JavaBean setter naming convention.
- Setting javabean properties (fields) directly is not supported.
The configuration shown above simply created the example.model.Order bean instance and bound it into the bean context. This section will describe how to bind data into that bean instance.
The Javabean Cartridge provides support for 3 types of data bindings, which are added as child elements of the <jb:bindings> element:
Value binding, via the <jb:value> binding configuration. This is used to bind data values from the Source message event stream into the target bean.
Wiring binding, via the <jb:wiring> binding configuration. This is used to "wire" another bean instance from the bean context into a bean property on the target bean. This is the configuration that allows you to construct an object graph (Vs just a loose bag of Java object instances).
Expression-based binding, via the <jb:expression> configuration. As its name suggests, this configuration is used to bind in a value calculated from an expression. A simple example is binding an order item's total value into an OrderItem bean, based on the result of an expression that calculates the total from the item's price and quantity (e.g. "price * quantity").
Taking the Order XML message (previous section), let's see what the full XML-to-Java binding configuration might be. We've seen the order XML (above). Now let's look at the Java Objects that we want to populate from that XML message (getters and setters not shown):
The Smooks config required to bind the data from the order XML and into this object model is as follows:
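A hedged reconstruction of such a configuration, following the numbering used in the discussion below ((1), (1.a) etc.); property names, data selectors and the date format are illustrative:

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:jb="http://www.milyn.org/xsd/smooks/javabean-1.1.xsd">

    <!-- (1) Order bean, created at the start of the message -->
    <jb:bindings beanId="order" beanClass="com.acme.Order" createOnElement="order">
        <jb:wiring property="header" beanIdRef="header"/>          <!-- (1.a) -->
        <jb:wiring property="orderItems" beanIdRef="orderItems"/>  <!-- (1.b) -->
    </jb:bindings>

    <!-- (2) Header bean -->
    <jb:bindings beanId="header" beanClass="com.acme.Header" createOnElement="order">
        <!-- (2.a) Value binding using the Date decoder -->
        <jb:value property="date" decoder="Date" data="header/date">
            <jb:decodeParam name="format">EEE MMM dd HH:mm:ss z yyyy</jb:decodeParam>
        </jb:value>
    </jb:bindings>

    <!-- (3) List for collecting the OrderItem instances -->
    <jb:bindings beanId="orderItems" beanClass="java.util.ArrayList" createOnElement="order">
        <jb:wiring beanIdRef="orderItem"/>  <!-- (3.a) no "property": wiring into a Collection -->
    </jb:bindings>

    <!-- (4) OrderItem bean, recreated for every order-item fragment -->
    <jb:bindings beanId="orderItem" beanClass="com.acme.OrderItem" createOnElement="order-item">
        <jb:value property="productId" decoder="Long" data="order-item/product"/>  <!-- (4.a) -->
        <jb:value property="quantity" decoder="Integer" data="order-item/quantity"/>
        <jb:value property="price" decoder="Double" data="order-item/price"/>
    </jb:bindings>

</smooks-resource-list>
```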
Configuration (1) defines the creation rules for the com.acme.Order bean instance (the top-level bean). We create this bean instance at the very start of the message, i.e. on the <order> element (createOnElement="order"). In fact, we create each of the bean instances ((1), (2) and (3) - all except (4)) at the very start of the message (on the <order> element). We do this because there will only ever be a single instance of these beans in the populated model. Configurations (1.a) and (1.b) define the wiring configuration for wiring the Header and List<OrderItem> bean instances ((2) and (3)) into the Order bean instance (see the beanIdRef attribute values and how they reference the beanId values defined on (2) and (3)). The property attributes on (1.a) and (1.b) define the Order bean properties on which the wirings are to be made.
Configuration (2) creates the com.acme.Header bean instance. Configuration (2.a) defines a value binding onto the Header.date property. Note that the data attribute defines where the binding value is selected from the source message; in this case it is coming from the header/date element. Also note how it defines a decodeParam sub-element. This configures the Date Decoder (decoder="Date").
Configuration (3) creates the List<OrderItem> bean instance for holding the OrderItem instances. Configuration (3.a) wires in the orderItem bean ((4)) instances into the list. Note how this wiring does not define a property attribute. This is because it wires into a Collection (same applies if wiring into an array).
Configuration (4) creates the OrderItem bean instances. Note how the createOnElement is set to the <order-item> element. This is because we want a new instance of this bean to be created for every <order-item> element (and wired into the List<OrderItem> (3.a)). If the createOnElement attribute for this configuration was not set to the <order-item> element (e.g. if it was set to one of the <order>, <header> or <order-items> elements), then only a single OrderItem bean instance would be created and the binding configurations ((4.a) etc) would overwrite the bean instance property bindings for every <order-item> element in the source message i.e. you would be left with a List<OrderItem> with just a single OrderItem instance containing the <order-item> data from the last <order-item> encountered in the source message.
Extended Lifecycle Bindings
Binding Key Value Pairs into Maps
If the <jb:value property> attribute of a binding is not defined (or is empty), then the name of the selected node will be used as the map entry key (where the beanClass is a Map).
There is one other way to define the map key. The value of the <jb:value property> attribute can start with the @ character. The rest of the value then defines the attribute name of the selected node, from which the map key is selected. The following example demonstrates this:
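An illustrative input fragment (element and attribute names are hypothetical):

```xml
<order>
    <property name="key1">value1</property>
    <property name="key2">value2</property>
    <property name="key3">value3</property>
</order>
```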
And the config:
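A sketch of the corresponding binding: a HashMap bean whose entry keys are taken from the "name" attribute of each selected node (names are illustrative):

```xml
<jb:bindings beanId="orderProperties" beanClass="java.util.HashMap" createOnElement="order">
    <!-- "@name": use the "name" attribute of the selected node as the map entry key -->
    <jb:value property="@name" data="order/property"/>
</jb:bindings>
```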
This would create a HashMap with three entries, with the key set [key1, key2, key3].
Of course, the @ character notation doesn't work for bean wiring. The cartridge will simply use the value of the property attribute, including the @ character, as the map entry key.
Virtual Object Models (Maps & Lists)
It is possible to create a complete object model without writing your own Bean classes. This virtual model is created using only Maps and Lists. This is very convenient if you use the Javabean Cartridge between two processing steps, for example XML -> Java -> EDI.
The following example demonstrates the principle:
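A sketch of a virtual model: a HashMap ("order") holding an ArrayList of HashMaps, one per order-item (property names and data selectors are illustrative):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:jb="http://www.milyn.org/xsd/smooks/javabean-1.1.xsd">

    <jb:bindings beanId="order" beanClass="java.util.HashMap" createOnElement="order">
        <jb:value property="orderId" data="order/@id"/>
        <jb:wiring property="orderItems" beanIdRef="orderItems"/>
    </jb:bindings>

    <jb:bindings beanId="orderItems" beanClass="java.util.ArrayList" createOnElement="order">
        <jb:wiring beanIdRef="orderItem"/>
    </jb:bindings>

    <jb:bindings beanId="orderItem" beanClass="java.util.HashMap" createOnElement="order-item">
        <jb:value property="productId" data="order-item/product"/>
        <jb:value property="quantity" decoder="Integer" data="order-item/quantity"/>
    </jb:bindings>

</smooks-resource-list>
```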
Take a look at the milyn/smooks-examples/xml-to-java-virtual for another example.
Merging Multiple Data Entities Into a Single Binding
This can be achieved using Expression Based Bindings (<jb:expression>).
Generating the Smooks Binding Configuration
The Javabean Cartridge contains the org.milyn.javabean.gen.ConfigGenerator utility class that can be used to generate a binding configuration template. This template can then be used as the basis for defining a binding.
From the commandline:
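An invocation might look as follows (the classpath and file names are illustrative; the classpath must contain Smooks, its dependencies and your model classes):

```
java -classpath <smooks-and-model-classpath> org.milyn.javabean.gen.ConfigGenerator \
     -c com.acme.Order -o order-binding-config.xml -p binding.properties
```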
- The "-c" commandline arg specifies the root class of the model whose binding config is to be generated.
- The "-o" commandline arg specifies the path and filename for the generated config output.
- The "-p" commandline arg specifies the path and filename of an optional properties file that specifies additional binding parameters.
The optional "-p" properties file parameter allows specification of additional config parameters:
- packages.included: Semicolon-separated list of packages; classes in these packages are included in the binding generation.
- packages.excluded: Semicolon-separated list of packages; classes in these packages are excluded from the binding generation.
After running this utility against the target class, you typically need to perform the following follow-up tasks in order to make the binding configuration work for your Source data model.
- For each <jb:bindings> element, set the createOnElement attribute to the event element that should be used to create the bean instance.
- Update the <jb:value data> attributes to select the event element/attribute supplying the binding data for that bean property.
- Check the <jb:value decoder> attributes. Not all will be set, depending on the actual property type. Some must be configured by hand, e.g. you may need to configure <jb:decodeParam> sub-elements for the decoder on some of the bindings (for a date field, for example).
- Double-check the binding config elements (<jb:value> and <jb:wiring>), making sure all Java properties have been covered in the generated configuration.
Determining the selector values can sometimes be difficult, especially for non XML Sources (Java etc). The Html Reporting tool can be a great help here because it helps you visualise the input message model (against which the selectors will be applied) as seen by Smooks. So, first off, generate a report using your Source data, but with an empty transformation configuration. In the report, you can see the model against which you need to add your configurations. Add the configurations one at a time, rerunning the report to check they are being applied.
The following is an example of a generated configuration. Note the "$TODO$" tokens.
Java to Java Transformations
Smooks can transform one Java object graph to another Java object graph. For this transformation, Smooks uses the SAX processing model, which means no intermediate object model is constructed for populating the target Java object graph. Instead, we go straight from the source Java object graph, to a stream of SAX events, which are used to populate the target Java object graph.
Source and Target Object Models
The required mappings from the source to target Object models are as follows:
Source Model Event Stream
Using the Html Smooks Report Generator tool, we can see that the Event Stream produced by the source Object Model is as follows:
So we need to target the Smooks Javabean resources at this event stream. This is shown in the Smooks Configuration.
The Smooks configuration for performing this transform ("smooks-config.xml") is as follows (see the Source Model Event Stream above):
The source object model is provided to Smooks via a org.milyn.delivery.JavaSource object. This object is created by passing the root object of the source model to its constructor. The resulting JavaSource object is used in the Smooks#filter method. The resulting code could look as follows:
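A sketch of such an invocation (the Order and LineOrder classes and the "lineOrder" beanId are illustrative, as is the Smooks 1.1-era filter signature):

```java
import org.milyn.Smooks;
import org.milyn.container.ExecutionContext;
import org.milyn.delivery.JavaSource;
import org.milyn.delivery.java.JavaResult;

public class OrderTransformer {

    public LineOrder transform(Order sourceOrder) throws Exception {
        Smooks smooks = new Smooks("smooks-config.xml");
        ExecutionContext executionContext = smooks.createExecutionContext();

        // Wrap the root object of the source model
        JavaSource source = new JavaSource(sourceOrder);
        // Capture the populated target model from the bean context
        JavaResult result = new JavaResult();

        smooks.filter(source, result, executionContext);

        return (LineOrder) result.getBean("lineOrder");
    }
}
```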
Processing Huge Messages (GBs)

One of the main features introduced in Smooks v1.0 is the ability to process huge messages (GBs in size). Smooks supports the following types of processing for huge messages:
- One-to-One Transformation: This is the process of transforming a huge message from its source format (e.g. XML) to a huge message in a target format, e.g. EDI, CSV, XML etc.
- Splitting & Routing: Splitting of a huge message into smaller (more consumable) messages in any format (EDI, XML, Java etc.) and Routing of those smaller messages to a number of different destination types (File, JMS, Database).
- Persistence: Persisting the components of the huge message to a Database, from where they can be more easily queried and processed. Within Smooks, we consider this to be a form of Splitting and Routing (routing to a Database).
All of the above is possible without writing any code (i.e. in a declarative manner). Typically, any of the above types of processing would have required writing quite a bit of ugly/unmaintainable code. It might also have been implemented as a multi-stage process where the huge message is split into smaller messages (stage #1) and then each smaller message is processed in turn to persist, route etc. (stage #2). This would all be done in an effort to make that ugly/unmaintainable code a little more maintainable and reusable. With Smooks, most of these use-cases can be handled without writing any code. As well as that, they can also be handled in a single pass over the source message, splitting and routing in parallel (plus routing to multiple destinations of different types and in different formats).
When processing huge messages with Smooks, make sure you are using the SAX filter.
If the requirement is to process a huge message by transforming it into a single message of another format, the easiest mechanism with Smooks is to apply multiple FreeMarker templates to the Source message Event Stream, outputting to the Smooks.filter Result stream.
This can be done in one of 2 ways with FreeMarker templating, depending on the type of model that's appropriate:
- Using FreeMarker + NodeModels for the model.
- Using FreeMarker + a Java Object model for the model. The model can be constructed from data in the message, using the Javabean Cartridge.
Option #1 above is obviously the option of choice, if the tradeoffs are OK for your use case. Please see the FreeMarker Templating docs for more details.
The following images show an <order> message, as well as the <salesorder> message to which we need to transform the <order> message:
Imagine a situation where the <order> message contains millions of <order-item> elements. Processing a huge message in this way with Smooks and FreeMarker (using NodeModels) is quite straightforward. Because the message is huge, we need to identify multiple NodeModels in the message, such that the runtime memory footprint is as low as possible. We cannot process the message using a single model, as the full message is just too big to hold in memory. In the case of the <order> message, there are 2 models, one for the main <order> data (blue highlight) and one for the <order-item> data (beige highlight):
So in this case, the most data that will be in memory at any one time is the main order data, plus one of the order-items. Because the NodeModels are nested, Smooks makes sure that the order data NodeModel never contains any of the data from the order-item NodeModels. Also, as Smooks filters the message, the order-item NodeModel will be overwritten for every order-item (i.e. they are not collected). See Mixing DOM and SAX Models with Smooks.
Configuring Smooks to capture multiple NodeModels for use by the FreeMarker templates is just a matter of configuring the DomModelCreator Visitor, targeting it at the root node of each of the models. Note again that Smooks also makes this available to SAX filtering (the key to processing huge messages). The Smooks configuration for creating the NodeModels for this message is as follows:
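A sketch of that configuration, targeting the DomModelCreator at the root node of each model:

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd">

    <!-- One NodeModel for the main order data, one (overwritten per item) for order-item data -->
    <resource-config selector="order,order-item">
        <resource>org.milyn.delivery.DomModelCreator</resource>
    </resource-config>

</smooks-resource-list>
```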
Now the FreeMarker templates need to be added. We need to apply 3 templates in total:
- A template to output the order "header" details, up to but not including the order items.
- A template for each of the order items, to generate the <item> elements in the <salesorder>.
- A template to close out the message.
With Smooks, we implement this by defining 2 FreeMarker templates: one to cover #1 and #3 (combined) above, and a second to cover the <item> elements.
The first FreeMarker template is targeted at the <order-items> element and looks as follows:
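A sketch of what this first template might look like (the <salesorder> field names are purely illustrative; the actual target format is shown in the images above):

```xml
<ftl:freemarker applyOnElement="order-items">
    <ftl:template><!--<salesorder>
    <details>
        <orderid>${order.@id}</orderid>
        <customer>${order.header.customer}</customer>
    </details>
    <itemList>
    <?TEMPLATE-SPLIT-PI?>
    </itemList>
</salesorder>-->
    </ftl:template>
</ftl:freemarker>
```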
You will notice the <?TEMPLATE-SPLIT-PI?> Processing Instruction. This tells Smooks where to split the template, outputting the first part of the template at the start of the <order-items> element, and the other part at the end of the <order-items> element. The <item> element template (the second template) will be output in between.
The second FreeMarker template is very straightforward. It simply outputs the <item> elements at the end of every <order-item> element in the source message:
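A sketch of the second template (field names are illustrative; FreeMarker's `.vars` notation is needed to reference the "order-item" NodeModel because of the hyphen in its name):

```xml
<ftl:freemarker applyOnElement="order-item">
    <ftl:template><!--<item>
    <id>${.vars["order-item"].@id}</id>
    <quantity>${.vars["order-item"].quantity}</quantity>
</item>-->
    </ftl:template>
</ftl:freemarker>
```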
Because the second template fires on the end of the <order-item> elements, it effectively generates output into the location of the <?TEMPLATE-SPLIT-PI?> Processing Instruction in the first template. Note that the second template could have also referenced data in the "order" NodeModel.
And that's it! This is available as a runnable example in the Tutorials section.
This approach to performing a One-to-One Transformation of a huge message works simply because the only objects in memory at any one time are the order header details and the current <order-item> details (in the NodeModels). Obviously it can't work if the transformation is so obscure as to always require full access to all the data in the source message, e.g. if the message needs to have all the order items reversed in order (or sorted). In such a case, however, you do have the option of routing the order details and items to a database and then using the database's storage, query and paging features to perform the transformation.
Splitting & Routing
Another common approach to processing large/huge messages is to split them out into smaller messages that can be processed independently. Of course, Splitting and Routing is not just a solution for processing huge messages. It's often needed with smaller messages too (message size may be irrelevant) where, for example, order items in an order message need to be split out and routed (based on content - "Content-Based Routing") to different departments or partners for processing. Under these conditions, the message formats required at the different destinations may also vary, e.g.:
- "destination1" requires XML via the file system,
- "destination2" requires Java objects via a JMS Queue,
- "destination3" picks the messages up from a table in a Database,
- "destination4" requires EDI messages via a JMS Queue,
- etc.
With Smooks v1.0, all of the above is possible. You can perform multiple splitting and routing operations to multiple destinations (of different types) in a single pass over a message.
The key to processing huge messages is to make sure that you always maintain a small memory footprint. You can do this using the Javabean Cartridge by making sure you're only binding the most relevant message data (into the bean context) at any one time. In the following sections, the examples are all based on splitting and routing of order-items out of an order message. The solutions shown all work for huge messages because the Smooks Javabean Cartridge binding configurations are implemented such that the only data held in memory at any given time is the main order details (order header etc) and the "current" order item details.
Complex splitting operations are supported through use of the Javabean Cartridge to extract the data for the split-message. In this way, you can extract and recombine data from across different sub-hierarchies of the Source message, to produce the split messages. It also means you can (through the use of templating) easily generate the split messages in a range of different formats. More on this later.
Routing to File
File-based routing is performed via the <file:outputStream> configuration from the http://www.milyn.org/xsd/smooks/file-routing-1.1.xsd configuration namespace.
This section illustrates how you can combine the following Smooks functionality to split a message out into smaller messages on the file system.
- The Javabean Cartridge for extracting data from the message and holding it in variables in the bean context. In this case, we could also use DOM NodeModels for capturing the order and order-item data to be used as the templating data models.
- The <file:outputStream> configuration from the Routing Cartridge for managing file system streams (naming, opening, closing, throttling creation etc).
- The Templating Cartridge (FreeMarker Templates) for generating the individual split messages from data bound in the bean context by the Javabean Cartridge (see #1 above). The templating result is written to the file output stream (#2 above).
In the example, we want to process a huge order message and route the individual order item details to file. The following illustrates what we want to achieve. As you can see, the split messages don't just contain data from the order item fragments. They also contain data from the order header and root elements.
To achieve this with Smooks, we assemble the following Smooks configuration:
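A hedged reconstruction of such a configuration, following the numbering (#1 to #4) used in the discussion below; property names, data selectors, file name patterns and the template path are all illustrative:

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:jb="http://www.milyn.org/xsd/smooks/javabean-1.1.xsd"
                      xmlns:file="http://www.milyn.org/xsd/smooks/file-routing-1.1.xsd"
                      xmlns:ftl="http://www.milyn.org/xsd/smooks/freemarker-1.1.xsd">

    <!-- #1: Extract the order header data -->
    <jb:bindings beanId="order" beanClass="java.util.HashMap" createOnElement="order">
        <jb:value property="orderId" data="order/@id"/>
        <jb:value property="customerName" data="header/customer"/>
    </jb:bindings>

    <!-- #2: Extract the current order-item data (recreated per order-item) -->
    <jb:bindings beanId="orderItem" beanClass="java.util.HashMap" createOnElement="order-item">
        <jb:value property="itemId" data="order-item/@id"/>
        <jb:value property="productId" data="order-item/product"/>
        <jb:value property="quantity" data="order-item/quantity"/>
    </jb:bindings>

    <!-- #3: Manage the file output streams; file names built from bean context data -->
    <file:outputStream openOnElement="order-item" resourceName="orderItemSplitStream">
        <file:fileNamePattern>order-${order.orderId}-item-${orderItem.itemId}.xml</file:fileNamePattern>
        <file:destinationDirectoryPattern>target/orders</file:destinationDirectoryPattern>
        <file:highWaterMark mark="10"/>
    </file:outputStream>

    <!-- #4: Generate each split message and write it to the stream opened by #3 -->
    <ftl:freemarker applyOnElement="order-item">
        <ftl:template>/templates/order-item-split.ftl</ftl:template>
        <ftl:use>
            <ftl:outputTo outputStreamResource="orderItemSplitStream"/>
        </ftl:use>
    </ftl:freemarker>

</smooks-resource-list>
```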
Smooks Resource configuration #1 and #2 define the Java Bindings for extracting the order header information (config #1) and the order-item information (config #2). This is the key to processing a huge message; making sure that we only have the current order item in memory at any one time. The Smooks Javabean Cartridge manages all this for you, creating and recreating the orderItem beans as the <order-item> fragments are being processed.
The <file:outputStream> configuration in configuration #3 manages the generation of the files on the file system. As you can see from the configuration, the file names can be dynamically constructed from data in the bean context. You can also see that it can throttle the creation of the files via the "highWaterMark" configuration parameter. This helps you manage file creation so as not to overwhelm the target file system.
Smooks Resource configuration #4 defines the FreeMarker templating resource used to write the split messages to the OutputStream created by the <file:outputStream> (config #3). See how config #4 references the <file:outputStream> resource. The FreeMarker template is as follows:
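An illustrative sketch of such a template, combining the current order-item data with the order header data held in the bean context (all element and property names are hypothetical):

```xml
<orderitem id="${orderItem.itemId}" order="${order.orderId}">
    <customer>${order.customerName}</customer>
    <product>${orderItem.productId}</product>
    <quantity>${orderItem.quantity}</quantity>
</orderitem>
```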
Routing to JMS
JMS routing is performed via the <jms:router> configuration from the http://www.milyn.org/xsd/smooks/jms-routing-1.1.xsd configuration namespace.
The following is an example <jms:router> configuration that routes an "orderItem_xml" bean to a JMS Queue named "smooks.exampleQueue" (also read the "Routing to File" example):
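A sketch of such a configuration (the highWaterMark value is illustrative; connection/JNDI details, if needed, are omitted):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:jms="http://www.milyn.org/xsd/smooks/jms-routing-1.1.xsd">

    <!-- Route the "orderItem_xml" bean to the queue on the end of each order-item fragment -->
    <jms:router routeOnElement="order-item" beanId="orderItem_xml" destination="smooks.exampleQueue">
        <jms:highWaterMark mark="10"/>
    </jms:router>

</smooks-resource-list>
```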
In this case, we route the result of a FreeMarker templating operation to the JMS Queue (i.e. as a String). We could also have routed a full Object Model, in which case it would be routed as a Serialized ObjectMessage.
Routing to a Database
Routing to a Database is also quite easy. Please read the "Routing to File" section above before reading this section.
So we take the same scenario as with the File Routing example above, but this time we want to route the order and order item data to a Database. This is what we want to achieve:
First we need to define a set of Java bindings that extract the order and order-item data from the data stream:
Next, we need to define a datasource configuration and a number of <db:executor> configurations that will use that datasource to insert the data (bound into the Java Object model) into the database.
The Datasource configuration (namespace http://www.milyn.org/xsd/smooks/datasource-1.1.xsd):
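A sketch of a direct JDBC datasource configuration (driver, URL and credentials are illustrative):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:ds="http://www.milyn.org/xsd/smooks/datasource-1.1.xsd">

    <ds:direct bindOnElement="#document"
               datasource="OrdersDS"
               driver="org.hsqldb.jdbcDriver"
               url="jdbc:hsqldb:hsql://localhost:9992/milyn-hsql-9992"
               username="sa"
               password=""
               autoCommit="false"/>

</smooks-resource-list>
```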
The <db:executor> configurations (namespace http://www.milyn.org/xsd/smooks/db-routing-1.1.xsd):
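A sketch of a <db:executor> configuration that references the datasource and inserts the current order item on the end of each order-item fragment (the table, columns and bean properties are illustrative):

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:db="http://www.milyn.org/xsd/smooks/db-routing-1.1.xsd">

    <db:executor executeOnElement="order-item" datasource="OrdersDS" executeBefore="false">
        <db:statement>INSERT INTO ORDER_ITEMS VALUES(${order.orderId}, ${orderItem.itemId}, ${orderItem.productId}, ${orderItem.quantity})</db:statement>
    </db:executor>

</smooks-resource-list>
```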
Check out the db-extract-transform-load example.
Message Splitting & Routing
Please refer to the Splitting & Routing section in the previous section.
Persistence (Database Reading and Writing)

Data can be read from and written/routed to a database using the <db:executor> configuration from the http://www.milyn.org/xsd/smooks/db-routing-1.1.xsd namespace. The <db:executor> requires a Datasource to be configured. This is done via the <ds:direct> and <ds:JNDI> configurations from the http://www.milyn.org/xsd/smooks/datasource-1.1.xsd configuration namespace.
See the "Routing to a Database" section.
Use the SQLExecutor to query a database. The queried data will be bound into the bean context (ExecutionContext). You can then use the bound query data to enrich your messages, e.g. where you are splitting and routing.