Tuesday, 22 November 2016

How to resolve "Self extraction to C:\Users\temp\AppData\Local\Temp\orcl942564381009262456.tmp failed error while installing ODI 12c

While trying to install ODI 12c, invoking the installer jar file from CMD failed with the error:
"Self extraction to C:\Users\temp\AppData\Local\Temp\orcl942564381009262456.tmp failed."

The installer could not find the second disk file. The solution is to keep both disk files of the installer in the same folder and invoke the jar from there, which resolves the issue.
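
As a rough sketch (the file names below are illustrative only and differ per ODI version), the folder layout and invocation from CMD should look like this:

cd C:\installers\odi
dir /b
odi_installer_1of2.jar
odi_installer_2of2.jar
java -jar odi_installer_1of2.jar

With both jar files sitting side by side, the self-extraction can locate the second disk file and the installer launches normally.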

Tuesday, 8 November 2016

Difference between OSB 11g And OSB 12c

The following are the differences between OSB 11g and OSB 12c:-

(i) In the new version, Oracle Service Bus (OSB) is renamed to Service Bus.
(ii) The Eclipse tool is used to develop an interface in OSB 11g, whereas the JDeveloper tool is used to develop an interface in OSB 12c.
(iii) Re-sequencing in OSB: in 11g this feature was available only in the Mediator; in Oracle SOA 12c it is added to Service Bus as well. With this feature we are able to process request messages in the proper sequence.
(iv) The Pipeline is separated from the Proxy service and is a separate component in 12c.
(v) MDS support is provided for OSB.
(vi) Because a Service Bus project is more like an SCA Composite (same overview), the Proxy is split from the Pipeline. This means that the Proxy and the Pipeline are two individual things. With this concept, multiple Proxies can be wired to one Pipeline. Say you have an "Any XML" Pipeline: you can process the data from both Proxies (e.g. a File adapter and a JMS queue).
(vii) One of the SOA Suite 12c pillars is Mobile Enablement. You can expose a Pipeline as a REST service. Doing so creates a REST binding just like in an SCA Composite. In the wizard you can assign specific Resource Paths and Operations to the operations of the Pipeline. The downside is that you can only expose Pipelines that have a WSDL interface.
(viii) Service Bus also supports the use of templates, but they work a little differently. Templates are Pipeline-based, which means that you can select a template when creating a new Pipeline. There are two types of templates: Unlinked, where the Pipeline is a copy of the template, and Linked, where the Pipeline stays connected to the template. In a template you can use Placeholders to permit changes to that part of the Pipeline. If the Pipeline is linked to a template and the template is changed, the linked Pipeline will inherit these changes.

Some more useful links provided by Oracle:

http://education.oracle.com/pls/web_prod-plq-dad/web_prod.view_pdf?c_id=D94373GC10&c_org_id=32&c_lang=US

http://www.oracle.com/technetwork/middleware/soasuite/overview/wp-soa-suite-whats-new-12c-2217186.pdf

Oracle WebLogic Server on Docker

What is Docker:-

Docker is a virtualization technology that uses containers.  A container is a feature that was added to the Linux kernel recently.  Solaris has had containers (or ‘zones’) for a long time.
Unlike a VM, it does not emulate a processor and run its own copy of the operating system, with its own memory, and virtual devices.  Instead, it shares the host operating system, but has its own file system, and uses a layering technology to overlay sparse file systems on top of each other to create its file system.
When you are ‘in’ the container, it looks like you are on a real machine, just like when you are ‘in’ a VM.  The difference is that the container approach uses far fewer system resources than the VM approach, since it is not running another copy of the operating system. This means that more of your physical memory is available to run the actual application you care about, and less of it is consumed by the virtualization software and the virtualized operating system.

You can use Linux containers without using Docker.  Docker just makes the whole experience a lot more pleasant.  Docker allows you to create a container from an ‘image’, and to save the changes that you make, or to throw them away when you are done with the container.

These images are versioned, and they are layered on top of other images.  So they are reusable.  For example, if you had five demo/training environments you wanted to use, but they all have SOA Suite, WebLogic, JDK, etc., in them – you can put SOA Suite into one image, and then create five more images for each of the five demo/training environments – each of these as a layer on top of the SOA image.  Now if you had one of those five ‘installed’ on your machine and you wanted to fire up one of the others, Docker allows you to just pull down that relatively small demo image and run it right on top of the relatively large SOA image you already have.

If you customize the demo and want to share the customizations with others, you can ‘push’ your image back to a shared registry (repository) of images.  Docker is just going to push the (relatively very small) changes you made, and other people can then use them by running that image on top of the ones they already have.

Running your customized image on top of the others they have will not change the others, so they can still use them any time they need them.  And this is done without the need for the relatively large ‘snapshots’ that you would create in a VM to achieve the same kind of flexibility.
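
A minimal sketch of that workflow with the Docker CLI (the image and registry names below are made up for illustration):

docker run -it --name soa-demo myregistry.example.com/soa-suite:12.2.1 bash    # start a container from the shared SOA image and customize it
docker commit soa-demo myregistry.example.com/soa-demo:1.0                     # save your changes as a new layer on top of the SOA image
docker push myregistry.example.com/soa-demo:1.0                                # only the (relatively small) new layer is uploaded
docker pull myregistry.example.com/soa-demo:1.0                                # others download just that layer on top of the base image they already have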

Here are some useful links for setting up Oracle Fusion Middleware WebLogic with Docker:-

http://www.oracle.com/technetwork/middleware/weblogic/overview/weblogic-server-docker-containers-2491959.pdf

https://www.linkedin.com/pulse/docker-oracle-fusion-weblogic-setup-vijaya-kumar-reddy-maddela

Friday, 1 July 2016

How to Read JMS Custom Header Properties


In OSB we have two methods of invoking a JMS queue or topic: (i) the jms transport protocol and (ii) the JCA adapter. In OSB the jms protocol is usually preferred. We usually deal with XML or non-XML content while working with JMS queues and topics, and there are some pre-defined header properties for JMS queues and topics that are used for specific purposes.

Sometimes we need to read extra parameters from a JMS queue/topic; these extra parameters are custom JMS header properties. We can read a custom header property for a JMS queue in OSB as follows.

The first step is to create a proxy service that will read the message from the JMS queue. To create it, right-click the folder, click New and choose the Proxy Service option. Choose “Messaging Service” as the Service Type.

In the Message Type configuration, select “Text” as the Request Message Type, as we will read a simple text message to test this use case, and choose “None” as the Response Message Type, as we are not sending any response back once we read the message from the JMS queue.

In the Transport tab, choose jms from the protocol drop-down, since we need to read the message from a JMS queue. In the Endpoint URI text box, fill in the required values for the JMS endpoint URI; for this use case we use the default connection factory and the name of the JMS queue. Keep the default settings in all other tabs and save the proxy service.
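
As a hedged example (host, port and JNDI names are placeholders; the queue is assumed to be registered under the JNDI name TestQueue and we use WebLogic's default connection factory), the endpoint URI would look like:

jms://localhost:7001/weblogic.jms.ConnectionFactory/TestQueue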

Make sure that the “Get All Headers” option is set to Yes. If we don’t select this option, we will not be able to read the custom header property from the JMS queue.

Now, to read the JMS custom header property from the $inbound variable, we can use the syntax below in an Assign action.

var1 = $inbound/ctx:transport/ctx:request/tp:headers/tp:user-header[@name='QueueDetail']/@value

Here QueueDetail is the custom header property name.

Save your proxy service and deploy it to the server. Now put a message on the JMS queue along with the custom header property. The proxy picks up the message and fetches the custom header property into the var1 variable, which we can print via a Log action on the server.
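
For completeness, below is a minimal sketch of a standalone JMS client that puts such a message on the queue with the custom header property set. The JNDI names (weblogic.jms.ConnectionFactory, TestQueue) and the t3 URL are placeholders for your own environment, and a WebLogic client jar (for example wlthint3client.jar) is assumed to be on the classpath.

import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class SendWithCustomHeader {
    public static void main(String[] args) throws Exception {
        // JNDI settings are placeholders; point them at your own WebLogic server
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");
        InitialContext ctx = new InitialContext(env);

        // Default WebLogic connection factory and an illustrative queue JNDI name
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("weblogic.jms.ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("TestQueue");

        Connection con = cf.createConnection();
        Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);

        TextMessage msg = session.createTextMessage("Hello OSB");
        // Custom JMS header property that the proxy service reads via $inbound
        msg.setStringProperty("QueueDetail", "SampleValue");
        producer.send(msg);

        producer.close();
        session.close();
        con.close();
        ctx.close();
    }
}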

Thursday, 28 April 2016

Service Result Caching in OSB


Oracle Service Bus 11g provides built-in result caching functionality that utilizes Oracle Coherence and supports many caching strategies.

Oracle Coherence Integration:-

Oracle Service Bus has always had some caching mechanics, providing XQuery and XMLBeans caching in addition to object caching for Proxy and Business Service classes. But from 11g Release 1 onward, Oracle Service Bus offers an integrated solution for result caching based on Oracle Coherence.

Result caching in Oracle Service Bus is available for Business Services. When you enable result caching, the Business Service does not reach the backend service and the response comes from the cache entry. The entry can expire, and if it has expired, the next service call will indeed reach the backend service and update the cache entry.

 
Activating Cache for Business Service
To enable result caching for our Business Service, follow these steps: open any business service.

Expand Advanced Settings and, under Result Caching, select Supported. Set the Cache Token Expression, for example: fn:data($body/ser:getCustomer/arg0)
 


This will use the customer ID as our unique token for the cache entry. Also, set the Expiration Time to 10 minutes.
Click Last >> and Save to confirm the Business Service modifications, then activate the change session.
Now test the proxy service.

Now the response is returned immediately, directly from a cache entry in the embedded Oracle Coherence server running within Oracle Service Bus. The cache Expiration Time (time-to-live) is set to 10 minutes, so during this period your calls will not hit the backend web service.

More complex cache tokens:-
Your desired cache token key may not be as straightforward as selecting the contents of a single, predictably placed node. It is worth noting that the Cache Token may be specified as any expression you like, using the XQuery functional programming language. XQuery has a rich syntax for constructing queries, including many system libraries with common functions such as numeric operations and string manipulation.
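
For example, a hedged sketch of a composite cache token built from two request fields (the element names are illustrative):

fn:concat(fn:data($body/ser:getCustomer/arg0), '::', fn:data($body/ser:getCustomer/countryCode))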
Clear the cache explicitly:-
It might be that you can’t risk the cache becoming stale under certain circumstances. If that is the case, rather than using a time parameter to manage the lifetime of the cached responses, mark critical requests with an additional flag and select XQuery Expression in the expiration time options to test for the existence of this flag.
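
As a rough sketch of that idea (the noCacheFlag element is illustrative, and you should verify the return type your OSB version expects from the expiration expression; here it is assumed to accept an xs:dayTimeDuration):

if (fn:data($body/ser:getCustomer/noCacheFlag) = 'true')
then xs:dayTimeDuration('PT0S')
else xs:dayTimeDuration('PT10M')

A zero duration effectively forces the next call through to the backend, while other requests keep the normal 10-minute lifetime.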

JCA FTP Adapter limitation in OSB


When we use the FTP Adapter with OSB 11g (specifically 11.1.1.6 in our case), we can encounter the below error in the osb_server1.out log file:

<BEA-000000> <onReject: The resource adapter 'FTP Adapter' requested handling of a malformed inbound message. However, the following activation property has not been defined: 'rejectedMessageHandlers'. Please define it and redeploy. Will use the default Rejection Directory file://jca/Get/rejectedMessages for now.>
We were stuck with the problem of handling a large file (specifically a 5 MB file) with OSB, which was continuously getting polled and deleted from the FTP server while the OSB logs showed the rejection handler error.

Solution:

The default properties of the JCA FTP Adapter limit files to about 4 MB when polling for inbound messages.
The reason is that when the FTP Adapter is used inbound to receive files from an FTP server, it is possible to limit the size of files by using the payloadSizeThreshold property.
For example, with the following endpoint property:
<property name="payloadSizeThreshold" type="xs:string" many="false" override="may">50000</property>
files that are larger than 50000 bytes will be rejected.

This works properly for all files smaller than around 4 MB, but for larger files a so-called scalable DOM is created, and in that case the payload threshold check is not performed.
To ensure that payloadSizeThreshold is properly recognized for files of all sizes, make sure the SupportsScalableDOM property for the FTP Adapter is set to false.
To do this, edit the FTP Adapter and add the following property inside the JCA file:
<property name="SupportsScalableDOM" value="false"/>

But changing this and making the JCA adapter poll large files can result in an out-of-memory error, since the ability to process large files depends on various factors such as system resources, load on the server and available memory.

ODI is the best tool for processing files whose size runs into gigabytes.

How large payloads are handled in Oracle SOA Suite 11g


The following are the options for handling large payload reading in SOA Suite 11g.

 1) Debatching
 2) Integration with ODI


 1) Debatching : We can configure the file adapter to process data in batches. For example, if we have 1000 records, we can limit it to reading 300 records at a time by setting "Publish Messages in Batches of" to 300 in the file adapter wizard configuration (see the sketch after this list). This creates a separate instance for each configured batch at run time.

 2) Integration with ODI : Oracle Data Integrator can handle large volumes of data; ODI uses a mechanism called ELT (Extract, Load and Transform). If a file grows beyond 10 MB or so, how well SOA 11g deals with it depends entirely on the hardware, memory and load-handling capacity of the server. Keeping in mind that a SOA process is still needed for communication with external parties, ODI can be used along with SOA. Below is an example to understand it better.
  a) SOA picks up the file as an attachment, without parsing the payload, and places it on the ODI box.
  b) SOA asks the ODI box to process the large file by invoking the ODI job as a web service.
  c) ODI sends an acknowledgement back to SOA once file processing is over, by calling SOA as a web service.
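
Regarding the debatching option above, here is a hedged sketch of how the batch size typically ends up in the file adapter's generated .jca activation spec (property and class names as commonly produced by the wizard; verify against your own artifact):

<activation-spec className="oracle.tip.adapter.file.inbound.FileActivationSpec">
    <!-- other properties generated by the File Adapter wizard -->
    <property name="PublishSize" value="300"/>  <!-- publish 300 records per message (debatching) -->
</activation-spec>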

Finally, debatching introduces multiple instances for one end-to-end execution, so it is difficult to track and monitor the interface execution, especially if you want the file to be processed in a single transaction.
SOA and ODI together provide an easy way of processing large volumes of data.