Monday, 19 November 2012

Testing the OSB's components - Part I


1. Why is it difficult to test OSB's components?

Testing components in OSB is not the same as testing components written in Java or another programming language. The main difference is that a Java component can usually be tested on its own, while an OSB component usually cannot: we need the whole environment (the backend systems and services used by the Service Bus) if we want to test an OSB component.
The other problem is that OSB has no built-in testing facility. Our only option is to send requests to the proxy service and examine the responses, i.e. to treat the OSB as a black box.
Some components can be tested without the OSB server (e.g. XQuery transformations). Using such components is strongly recommended (we often have a choice of which element types to use in a proxy service), because testing these parts of the OSB code in advance eliminates many potential problems later.
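
Just to illustrate how an XQuery transformation can be tested completely outside the OSB server: the sketch below uses Saxon's s9api (any standalone XQuery processor would do). The file name order2invoice.xq, the external variable $request and the sample payload are made up for the example; they are not part of any real project.

import java.io.File;
import java.io.StringReader;
import javax.xml.transform.stream.StreamSource;
import net.sf.saxon.s9api.*;

public class Order2InvoiceXQueryTest {
    public static void main(String[] args) throws Exception {
        Processor processor = new Processor(false);   // Saxon-HE is enough for this

        // compile the same XQuery file that the proxy service uses
        XQueryExecutable xquery = processor.newXQueryCompiler()
                .compile(new File("transformations/order2invoice.xq"));

        // build a sample request document, just like the one the proxy would receive
        XdmNode request = processor.newDocumentBuilder().build(new StreamSource(
                new StringReader("<order><id>42</id><amount>100</amount></order>")));

        // bind it to the external variable declared in the XQuery and run the transformation
        XQueryEvaluator evaluator = xquery.load();
        evaluator.setExternalVariable(new QName("request"), request);
        XdmValue invoice = evaluator.evaluate();

        // a real test would assert on the result (e.g. with JUnit); here we just print it
        System.out.println(invoice);
    }
}

A test like this runs in seconds on a developer machine, without a deployed OSB domain or any mocked backend.
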
So we have two main tasks if we want to build a test environment for Oracle Service Bus:
  • building (mocking) the environment of OSB
  • working out how to test the components

2. Mocking the world

It is vital that the test environment mirror the production environment. However, we can easily see that this can be guaranteed for our own components but not for the backend systems that live at the client's site (just think of really huge systems like SAP).

We need these systems to execute our tests. More precisely, we need an endpoint that looks exactly like the one in the real system (so that we can send it our requests), and this endpoint should send back a response (in the case of a synchronous operation). The solution is to simulate the systems.

Simulating a system is not so complicated in the case of web services: there are many tools available (I prefer soapUI). If the protocol isn't SOAP, mocking is more difficult. Sometimes no suitable tool can be found, and in that case we have to build a temporary solution (e.g. a fake interface written in Java or another programming language, deployed onto an application server if necessary).


As the diagram above shows, we have to group the services in the "mocked world" differently than in the real world. This simply means that we mock all the web services with one tool, all the HTTP-over-XML services with another tool (or the same one), and so on.

Here are some ideas for simulating the different kinds of services (if you have other suggestions, please let me know):
- SOAP based webservices
      soapUI - www.soapui.org
- REST
      soaprest-mocker - http://sourceforge.net/p/soaprest-mocker/wiki/Home/
      restito - https://github.com/mkotsur/restito
- DB call
      It depends on whether a DB Adapter or the built-in OSB function fn-bea:execute-sql is used. In the latter case we need a database to execute the function, but it doesn't have to be the real one!
- RMI (EJB)
      probably the best approach is to write an EJB with exactly the same signature but a different body (no business logic, just simple logic that sends back a response); see the sketch after this list
- HTTP over XML
      soapUI
- CORBA
      CorbaMock - sourceforge.net/projects/corba-mock/
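
For the RMI (EJB) case, a minimal sketch of such a "same signature, no business logic" mock might look like this. The interface and method names (CustomerService, getCustomerName) are invented for the example; a real mock must of course implement the actual remote interface of the backend EJB.

import javax.ejb.Remote;
import javax.ejb.Stateless;

// The remote interface must be identical to the one exposed by the real backend EJB.
@Remote
interface CustomerService {
    String getCustomerName(String customerId);
}

// The mock bean: same signature, no business logic, only a canned but
// request-aware response so that the OSB test scenarios get predictable data back.
@Stateless
class CustomerServiceMock implements CustomerService {
    @Override
    public String getCustomerName(String customerId) {
        // copy a field from the request into the response and add a static marker
        return "TEST-CUSTOMER-" + customerId;
    }
}

This mock is deployed onto the application server instead of (or next to) the real bean, and the business service in OSB is pointed at it.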

As I mentioned earlier, OSB has no built-in testing facility, so it can only be tested by sending it requests and examining the responses. This means we have to build many test scenarios and many requests for them. If our mocked services were only static endpoints, we would also have to create many responses for each mocked backend system! So it is crucial to use a tool capable of building dynamic responses based on the received request (e.g. copying a field from the request into the response, putting timestamps / random values / ... into the response, and so on).
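
If no ready-made tool fits a protocol, even a tiny hand-written mock can produce such dynamic responses. The following stand-alone sketch uses the JDK's built-in com.sun.net.httpserver (Java 9 or later because of readAllBytes) to fake an XML-over-HTTP backend; the port, the path and the orderId element are made up for the example. It copies a field from the request into the response and adds a timestamp, exactly the kind of behaviour described above.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class XmlBackendMock {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8088), 0);
        server.createContext("/orders", exchange -> {
            String request = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            // naive "dynamic response": copy the <orderId> element back and add a timestamp
            String orderId = request.replaceAll("(?s).*<orderId>(.*?)</orderId>.*", "$1");
            String response = "<orderResponse><orderId>" + orderId + "</orderId>"
                    + "<processedAt>" + java.time.Instant.now() + "</processedAt>"
                    + "<status>ACCEPTED</status></orderResponse>";
            byte[] body = response.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "text/xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Mock XML backend listening on http://localhost:8088/orders");
    }
}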

I prefer soapUI because it covers almost any task that comes up when testing or mocking web-service-based services: running load tests, mocking services (including building dynamic responses based on the received requests), creating unit tests and so on.


I am afraid that testing is too big a topic to pack into one blog entry, so
To Be Continued...

Monday, 6 August 2012

Design influenced by high loading

One of the most important requirements for OSB is response time. Usually we must be prepared to process a huge number of requests. The more requests the OSB has to serve, the more time it needs to process each one. After a while the response time becomes so long that it degrades the user experience, fails to meet the SLA values, or simply produces an error because of a timeout.
It is also vital to have spare capacity for unexpected load. Just imagine that a campaign has been launched to popularise a new product. If the campaign is successful, it can generate more traffic than usual. If we are not prepared to handle the extra requests, it is really hard to explain to the business users or our boss that the problem is that too many customers want to buy our products... :)

The obvious solution in this case is to add more servers to our OSB environment. Of course there are other possible workarounds (optimizing the code of the OSB components or of external services), but adding servers is the quickest one. (It is usually pretty difficult to change existing code, and sometimes impossible if an external system would have to be optimized.)
I am not claiming that getting and installing a new server takes only minutes. A server can cost a fortune, so acquiring one or more needs approval from different departments, which can take ages. Afterwards it has to be set up, installed, etc.
Nevertheless, our OSB components/environment must be prepared to work in a clustered environment. We could say that this is not difficult at all: we put the new server into the WebLogic cluster and the components are deployed onto the new server as well.


In the picture above our cluster contains seven servers, and all the OSB components (proxy services, etc.) are deployed onto every server. It doesn't seem like a bad idea, but we can do better! Have a look at the next figure:



Why is it better? First of all, this architecture is more flexible: we have many more options for setting up the system. The system will probably be faster too.
However, I should explain the diagram above in more detail. There are three clusters in this architecture: cluster C1 uses 4 servers (S4, S5, S6, S7), cluster C2 uses 3 servers (S1, S2, S3) and cluster C3 has 7 servers (S1 - S7).
Let's suppose we have 40 proxy services that must be deployed onto the servers. In the first architecture there is just one cluster containing all the servers, so we simply deploy the proxy services onto it and that's it. It is very simple, but on the other hand we have no way to tune the system. What if we need to improve the performance of a single service? With only one cluster, we can only improve the performance of the entire OSB environment by adding new servers. If we have several clusters using different servers, we can add the new server(s) to exactly the cluster where our service runs.
It is important to examine the services and estimate their load (e.g. how many requests must be served per day/hour/etc.). We have to identify the services that will be heavily used and those that won't be. Continuing the example above, say 10 of the 40 proxy services are heavily used and the rest are used less often. With 3 clusters we have several options for arranging the services:

              Heavily used services    Not often used services
Cluster 1               6                         5
Cluster 2               4                         5
Cluster 3               0                        20

Of course this may not be the best allocation of the services, but it is just an example; the important thing is that WE CAN improve the performance of our OSB environment if necessary. What's more, we can tune exactly the part of the environment that really needs improvement, so the tuning can be much more effective.

So the basic idea is to group the services that must be deployed onto the same cluster. The simplest way to do this is to create an OSB project for each cluster and put into it the services that should be deployed onto that cluster. Take a look at the next picture:


There are four projects for the OSB components:
  • cluster_1 : for the services that must be deployed onto Cluster 1
  • common_components : artifacts that may be used by several projects or proxy services (e.g. XSDs, internally used proxy services, etc.)
  • services_COMMON : generally used proxy services
  • business_services : for the business services
I put the business services into a separate OSB project because a business service may be used both by a proxy service in Cluster 1 and by another proxy service in Cluster 3. Creating the same business service twice (in the cluster_1 and cluster_3 OSB projects) would not be a good idea, so we create it in a separate project.
Looking ahead, it is a better solution to define all the business services in one project and simply deploy them to every cluster.

Tuesday, 3 July 2012

Service Facade design pattern in OSB

The Service Facade design pattern is well known in the Java world. We use it when we want to decouple the business logic from the service interface specifications. With this pattern we can easily add a new interface to our Java component.
This problem (decoupling business logic and interfaces) occurs not only in Java components but in any middleware component.
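
As a quick reminder of what the pattern looks like in plain Java: the business logic lives in exactly one class, and each "facade" only adapts one interface technology to it. The class and method names below are made up for illustration.

// Core component: the business logic exists only here.
class OrderProcessor {
    String processOrder(String orderXml) {
        // ... the real business logic would live here ...
        return "<result>OK</result>";
    }
}

// Facade for one interface technology (e.g. called from a SOAP endpoint);
// it only adapts the call and delegates, it contains no business logic.
class OrderSoapFacade {
    private final OrderProcessor processor = new OrderProcessor();

    String handleSoapRequest(String soapBodyXml) {
        return processor.processOrder(soapBodyXml);
    }
}

// A second facade (e.g. fed by a JMS listener); adding it did not require
// copying or touching the business logic at all.
class OrderJmsFacade {
    private final OrderProcessor processor = new OrderProcessor();

    void onMessage(String messageXml) {
        processor.processOrder(messageXml);
    }
}

The OSB solution described below mirrors this structure: one proxy service plays the role of OrderProcessor and the facade proxy services play the role of the thin adapters.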

Applying this pattern is vital for our OSB components as well. One of OSB's greatest strengths is that middleware components can be created very quickly and easily. However, using this design pattern in OSB is not so straightforward at first glance. Please have a look at the next image:


As you can see, the service definition is bundled with the message flow. Do we have to copy and paste the message flow if we want to create an additional interface (e.g. for JMS)? Of course not, we mustn't even think of such a solution! The right solution is the following:


We create one proxy service for the business logic and additional proxy services for the facade interfaces (e.g. if we have to provide both a JMS and a web service interface for our service, we end up with three proxy services).
The only tasks of the facade proxy services are to expose their interfaces and to call the WORKFLOW proxy, so their message flows stay very simple.


We implement the business logic in the WORKFLOW proxy service. The attributes of this proxy are:
Service type: Any XML Service
Protocol: local
This proxy must be defined as a local service. You can find more information about local proxies here: http://docs.oracle.com/cd/E23943_01/dev.1111/e15866/local.htm. For now it is enough to know that this type of interface lets us define common, internally reusable components in OSB.
It is very important that, in case of a fault, we use a 'Reply with Success' action; otherwise we would not be able to catch the fault in the facade proxy service.

I created the following project/folder structure:



Wednesday, 13 June 2012

Deployment framework

Deployment is a completely different task from development. Well, that is not exactly a revelation... :)
The important point behind this thought is that the developer is usually not the person who deploys the components; it is another expert who knows nothing about the development details, so we have to provide a framework for them.

The deployment expert needs the source of the OSB components, the environment-dependent values (see the previous blog entry) and some information about the OSB server (e.g. admin user name, password, etc.). With these they can create an installation package and execute it on the development, test, production, ... server.

What does this installation package consist of? How can it be created? This is exactly why we need a framework that can be customized.
The OSB components must be deployed onto different environments:
 - during development we use a local machine or a development server
 - for testing we have to use test servers
 - we deploy the components onto the production servers too
 - ...
In each case we check the framework out of the version control system, add the source of the OSB components to it and customize the property files. This package becomes our installation package, which we just have to copy to the OSB server or to a machine from which the OSB server is accessible.

And a warning: this is my idea of the deployment framework so it may contain errors or bad ideas. :)

Directory structure

└───deploy_framework
    ├───dist
    ├───env.all
    │   │   build-change.xml
    │   │   change-System.xml
    │   │
    │   └───System
    │       ├───JNDI_Providers
    │       ├───Operator_Settings
    │       ├───Proxy_Servers
    │       ├───SMTP_Servers
    │       └───UDDI
    ├───env.dev
    ├───env.local
    │       build.properties
    │       change.properties
    │       run_all.bat
    │       run_build.bat
    │
    ├───env.prod
    ├───env.test
    ├───libs
    │       xmltask.jar
    │
    ├───scripts
    │       build.xml
    │       import.py
    │
    └───src


dist: After a build, the sbconfig.jar (the file that will be deployed onto the OSB server) can be found here. It is created by the framework.
env.all: Settings that are valid for every environment. They are modified at project level, not for a particular environment.
For example, we define here where the OSB component source files should be modified (e.g. XPath definitions).
env.dev: Settings for the development environment. We can start the deployment onto the development server by executing the run_***.bat file. It may contain the same files as the env.local folder.
env.local: Settings for the local (localhost) environment. We can start the deployment onto the local server by executing the run_***.bat file.
env.prod: Settings for the production environment. We can start the deployment onto the production server by executing the run_***.bat file. It may contain the same files as the env.local folder.
env.test: Settings for the test environment. We can start the deployment onto the test server by executing the run_***.bat file. It may contain the same files as the env.local folder.
libs: JAR files needed by the framework.
scripts: Script files (Ant, Python) needed to execute the framework.
src: Folder for the source files of the OSB components. We copy here the OSB projects (belonging to the configuration project) that we want to deploy, and then the configuration project itself as well.

Property files

The following files must be adjusted before deployment.

Project level settings

build-change.xml: These settings define which values must be modified for the given environment (e.g. we can specify here where the URL of the SMTP server has to be overwritten with the value taken from the changes.properties file). These settings only have to be specified once. In my case this is an ANT build file. For example:
<project>
    <target name="_change.values">
        <xmltask source="../${env.BUILD_ENVIRONMENT}/tmp/System/SMTP_Servers/EmailServer.SMTPServer" dest="../${env.BUILD_ENVIRONMENT}/tmp/System/SMTP_Servers/EmailServer.SMTPServer">
            <replace path="/xml-fragment/*[local-name(.)='serverURL']/text()" withText="${EmailServer.smtp.Server_URL}"/>
            <replace path="/xml-fragment/*[local-name(.)='portNumber']/text()" withText="${EmailServer.smtp.Port_Number}"/>
        </xmltask>
        ...
    </target>
     ...
 </project>

Environment dependant settings

build.properties: These settings are needed for creating the sbconfig.jar and deploying it onto the server. An example:
middleware.home=E:/Oracle/Middleware
osb.home=${middleware.home}/Oracle_OSB1
wls.username=weblogic
wls.password=weblogic1
wls.server=t3://192.168.1.156:7001
config.project=All-ConfigurationProject
config.jar=E:/work/MAVIR/SOA/deploy/dist/sbconfig.jar
config.subprojects=CommonResources,EmailManager
config.includeDependencies=true
workspace.dir=e:/work/MAVIR/SOA/deploy/src
import.project= None
import.jar=E:/work/MAVIR/SOA/deploy/dist/sbconfig.jar
import.customFile=None

changes.properties: Defines the values that must be written into the OSB source files (e.g. endpoint URLs, usernames, passwords, etc.).
EmailServer.smtp.Server_URL=mailserver.acme.hu
EmailServer.smtp.Port_Number=25
...

run_***.bat: We can start the build/deploy with these scripts.
@ECHO OFF
rem ******************* this value must be modified *******************
set FMW_HOME=E:\Oracle\Middleware
rem ******************* this value must be modified *******************
set OSB_HOME_VALUE=Oracle_OSB1
set ANT_HOME=%FMW_HOME%\modules\org.apache.ant_1.7.1
set PATH=%ANT_HOME%\bin;%PATH%
set JAVA_HOME=%FMW_HOME%\jdk160_18
set CLASSPATH=%FMW_HOME%/wlserver_10.3/server/lib/weblogic.jar;%FMW_HOME%/%OSB_HOME_VALUE%/lib/alsb.jar;%FMW_HOME%/%OSB_HOME_VALUE%/modules/com.bea.common.configfwk_1.3.0.0.jar;..\libs\xmltask.jar
SET BUILD_ENVIRONMENT=env.dev
call ant -buildfile ../scripts/build.xml download build change deploy -verbose
pause



We can build and deploy the OSB components automatically by executing the run_all.bat script. Alternatively, we can create only the sbconfig.jar file by executing run_build.bat and then deploy it manually.

Saturday, 26 May 2012

Deploying OSB components onto different environments

Every OSB expert has already faced the problem of deployment. The task seems pretty obvious at first glance, but it becomes quite involved once we dig into the details.
There are many values in the OSB components that differ from environment to environment (local / development / test / ... / production server). We want to deploy the same source code to the test server and to the production server, and we want to use a CI tool, so hard-coded constant values in the source are not an option.

The customization file could be a perfect solution for this job ... but it is not. At least it was not the best solution for me.

Let me explain: I had to send an email from a proxy service. The recipient of the email must be different in the production, test and development environments (obviously, we wouldn't want to send emails to the business users during tests...). I set the recipient of the email in the message flow by modifying the Transport Header, and this value cannot be changed with the help of the Customization File.
Another example is changing the port number of the SMTP server (you can change the endpoint URL of the SMTP server but not the port number, which is a separate property of the SMTP server resource).
There are many other cases where the Customization File fails, so I had to find another way to change these values in the proxy services.

Unfortunately I could not find any other way (OSB 11.1.1.3.0) to change these values. I could work out just one solution: changing the XML file of the proxy service. It seems like a rather involved (or rough...) solution, but you can get used to it. :)
An XML file is created whenever you create a proxy service. All you need to do is find the value in that XML that you want to change and write an XPath expression that locates it. Afterwards you can change the value, for example with an ANT task. Unfortunately this is not so easy either...

The first problem is that we don't want to modify the files in the source folder on our local machine. Let's say your workspace is in C:\work\OSB\workspace (apologies to the Linux users :) ). You use this workspace for deploying OSB components into your local OSB domain, so I don't think you'd want to mess up that folder.
Besides, the result we need is a JAR file containing all the environment-dependent modifications, which we simply deploy onto the given domain (see Continuous Integration).
So this is my scenario:
  1. create a deployable JAR file
  2. unzip the JAR file into a temporary folder
  3. change the values according to the given environment
  4. create the JAR file from the temporary folder
I used this ANT target for the first step:

<target name="build">
  <delete failonerror="false" includeemptydirs="true" dir="${metadata.dir}"/>
  <java dir="${eclipse.home}"
        jar="${eclipse.home}/plugins/org.eclipse.equinox.launcher_1.0.201.R35x_v20090715.jar"
        fork="true" failonerror="true" maxmemory="768m">
    <jvmarg line="-XX:MaxPermSize=256m"/>   
    <arg line="-data ${workspace.dir}"/>
    <arg line="-application com.bea.alsb.core.ConfigExport"/>
    <arg line="-configProject ${config.project}"/>
    <arg line="-configJar ${config.jar}"/>
    <arg line="-configSubProjects ${config.subprojects}"/>
    <arg line="-includeDependencies ${config.includeDependencies}"/>
    <sysproperty key="weblogic.home" value="${weblogic.home}"/>
    <sysproperty key="osb.home" value="${osb.home}"/>
    <sysproperty key="osgi.bundlefile.limit" value="500"/>
    <sysproperty key="harvester.home" value="${osb.home}/harvester"/>
    <sysproperty key="osgi.nl" value="en_US"/>
    <sysproperty key="sun.lang.ClassLoader.allowArraySyntax" value="true"/>
  </java>
</target>

Performing the second step wasn't very difficult:

<unzip src="${config.jar}" dest="../${env.BUILD_ENVIRONMENT}/tmp"/>

There are several ways to modify the values; unfortunately I chose an inconvenient and convoluted one. :)

<target name="_change.values">
  <xmltask source="../${env.BUILD_ENVIRONMENT}/tmp/System/SMTP_Servers/EmailServer.SMTPServer" dest="../${env.BUILD_ENVIRONMENT}/tmp/System/SMTP_Servers/EmailServer.SMTPServer">
    <!-- change email server URL -->
    <replace path="/xml-fragment/*[local-name(.)='serverURL']/text()" withText="${EmailServer.smtp.Server_URL}"/>
    <!-- change email server port number -->
    <replace path="/xml-fragment/*[local-name(.)='portNumber']/text()" withText="${EmailServer.smtp.Port_Number}"/>
  </xmltask>
  ...

Then I created a properties file in which I defined the values:

EmailServer.smtp.Server_URL=XXXXXX
EmailServer.smtp.Port_Number=25

Creating a JAR file is easy too:

<jar destfile="${config.jar}" basedir="../${env.BUILD_ENVIRONMENT}/tmp"/>

Unfortunately I had to add steps 2.1 and 2.2 as well... When the JAR file was created by the WebLogic ANT task, the Configuration Project level resources (e.g. the SMTP server) were not put into the JAR. I don't know whether this was a problem with the ANT task or whether I did something wrong (the latter is of course less likely... :) ).
So I had to copy some folders (see below) into the JAR with the help of ANT and modify the ExportInfo file inside the JAR too.


UPDATE
I have uploaded the deployment framework that I used to deploy the components; please let me know if you run into any issues.

Wednesday, 23 May 2012

The beginnings... - how to organize OSB components

Creating and supporting an OSB platform is not an easy task. OK, it might be more precise to say "it is hard and very difficult". I don't think I have told anybody a great truth there...

You have to address and find good solutions for the following problem domains:
  •  design
  •  development
  •  deployment
  •  monitoring
  •  management
And we have already arrived ... this is SOA Governance for OSB. But I don't want to start with that topic now, so let's begin with the basics: how to design.

Let me refine the topic above: not how to design in general, but how to structure the OSB components. It doesn't seem important, but believe me, it is! If you have a lot of services on OSB and no well-defined vision of how to arrange them, then ... well, you will go crazy after a while.

Let me give you an example. Suppose you have a brand-new OSB and you have to develop and deploy a proxy service that polls an email server. It sends the content of the attachment of each incoming email to a system via web service. These emails are sent by your customers and contain their orders.
We might think 'no problem, let's create a folder named EmailService and put the OSB proxy service into it'. It is a "good" solution; it will work.
A few days later our boss decides that emails coming from our suppliers should also be processed with OSB. Perfect, we already have a folder for email-polling services, so let's put the new proxy service into it as well. Then we have to receive orders from our customers via web service. And here is the real dilemma: do we need to create another folder for this proxy service? But these two services (receiving orders from customers via email and via web service) belong to the same client... Should we create a new folder and name it CustomerServices? And there is another catch here: these services shouldn't belong to the customer, because the customers are just the clients of the services; the Order Management System should own them.
To clarify the problem, check the next figure:


As you can see, multiple clients use the same OSB service, so the service must belong to the Order Management System.
If we follow this idea, we realize that in this respect the OSB is part of the source system. OK, it sounds strange, but have a look at the next figure:


The OSB is transparent to the clients: they simply want to call the services of the Order Management System. The OSB can extend the capabilities of the source system so that clients can access it via new protocols, different data structures and so on.

This is my vision of how to structure the OSB components:
 
* services_OrderManagementSystem (folder)
       * atomic (folder)
              * order (folder)
                     * webservice (folder)
                            * business (folder)
                                   - BS (business service)
                             * proxy (folder)
                                   - PS (proxy service)
       * domain (folder)
              * order (folder)
                     * email (folder)
                             * proxy (folder)
                                   - PS (proxy service)

* services_UnknownSystem (folder)
       * atomic (folder)
              * unknown_business_logic (folder)
                     * webservice (folder)
                            * business (folder)
                                   - BS (business service)
                             * proxy (folder)
                                   - PS (proxy service)

The company's systems are at the top level of the directory structure. Other naming conventions can be used at the top level as well, e.g. the names of business areas (CRM, billing and so on), but this one worked best for me. As you can see in the figure below, these top-level items are Oracle Service Bus projects, not folders.
The second level: is the service an atomic or a domain service? And what is the difference between atomic and domain services? The simple answer is complexity. If you just pass the incoming request on to the source system without processing it, it is an atomic service. If you need to do something with the input data, it is a domain service (e.g. if you have to do a protocol transformation, the service is not atomic). It is up to you which service counts as atomic or domain, and indeed whether you need this level at all.
Third level: I think the services should be distinguished by business function. In this example the client can send orders to the source system through the OSB components, which means this service is about managing orders.
The fourth level is the protocol. It eases navigation in OSB: you can easily find the service you need.
Last level: the proxy and business services must be put into different folders.


You are free to use the levels differently, for example swapping the second and third levels if that suits your requirements better.

So this is my pattern for organizing OSB components. It helped me a lot not to get confused.

UPDATE1
********
Nobody can be  experienced enough... :)

So if your services are heavily used (100,000 hits / day), you usually have to deploy them onto different servers. (I am not talking about a single cluster here; within a cluster all the services can be found on every managed server.
I mean, for example, that you have 5 services: 2 of them are on cluster 1, which consists of several managed servers, and the remaining services are on cluster 2.)

In this case you have to group the services and deploy the groups onto different servers (with the grouping above, all the services in an OSB project can only be deployed onto one server together).
It is more reasonable to group services according to your cluster structure. What I mean is:

If you want to deploy serviceA and serviceB onto cluster 1, and serviceC, serviceD and serviceE onto cluster 2, then:

* cluster_1
       * service_A
       * service_B
* cluster_2
       * service_C
       * service_D
       * service_E

My original idea about grouping works well if your services are not heavily used and you want to keep a system's services together in one cluster. However, if some of your services are heavily used, you should think about deploying them separately.

Friday, 3 February 2012

How to design? - the performance of OSB

What factors do we have to consider in the design phase or during architecture planning to get the best performance out of OSB? I am sure I can't give a complete answer to this question, just a few tricks and thoughts; not because I don't want to, but simply because I don't have sufficient knowledge. :)

Performance is important for any system, but for OSB it is truly vital. As I mentioned earlier, the OSB acts as a gateway between two or more systems (we know it is not just a gateway, but in a communication between two systems that is what it looks like).
Data doesn't just flow through the OSB (this is true even for the simplest message): the OSB parses the request/response XML messages, logs events, runs some internal handler components (e.g. deciding what to display on the OSB console), saves data into the OSB management database and so on. These operations take time and resources.

So we can say that communication is quicker without the OSB than with it... This is a harsh conclusion, but take a look at the next figure:


It is clear that the second communication path is quicker than the first one. But the comparison is only this clear-cut if we are talking about pure network communication (e.g. web service calls).
I know that the mission of the OSB is not to accelerate communication (although in some cases it can, as we will soon see), but in this post I am going to examine the performance factor.

Business services in OSB can handle many protocols, and we can even use many of the Oracle SOA Suite adapters. Using these adapters can be more efficient than any communication handling the systems themselves could implement. So it is often better to use the OSB to connect two systems than to develop a new communication channel (e.g. 'system A' has to send data to 'system B', but 'system B' can only receive data via email, so you would have to build an email-sending module into 'system A').

We can use Oracle Coherence to cache static data on the OSB. If we are sure that the source system sends back static data (not transactional data, but things like product codes or dates of birth), then the OSB (with Oracle Coherence) can return this information directly without calling the source system. Take a look at the next figure:


The OSB (with Oracle Coherence) can serve many client requests without calling the source system. The first time (when any client sends its first request) the cache has to be filled: the OSB calls the source system and puts the result into the cache. Later the source system is only called again when the cached data must be refreshed (the refresh frequency can be configured).
We can take load off the source system with the OSB's help, so the performance of the communication can be improved as well. We should be careful about how we use Oracle Coherence, though: how often the cache has to be refreshed, which services are worth backing with Coherence, and so on.
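
In OSB the result cache is switched on declaratively on the business service, so no code has to be written; still, the idea behind it is the classic cache-aside pattern. Here is a minimal plain-Java sketch of that idea against the Coherence API, purely as an illustration (the cache name "product-codes" and the lookup method are made up):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ProductLookupCache {

    private static final NamedCache cache = CacheFactory.getCache("product-codes");

    public static String getProductName(String productCode) {
        String cached = (String) cache.get(productCode);
        if (cached != null) {
            return cached;                                   // served from the cache, source system not called
        }
        String fromSource = callSourceSystem(productCode);   // the first request fills the cache
        cache.put(productCode, fromSource);
        return fromSource;
    }

    private static String callSourceSystem(String productCode) {
        // placeholder for the real (slow) backend call
        return "Product " + productCode;
    }
}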

We can also protect the source system from being overwhelmed with requests: the number of requests can be limited with the throttling feature of OSB, which is an attribute group of the business service.

Another important capability of OSB is to alert us when there is a problem with a source system's services. We can define SLA alerts for business services to indicate that a service has stopped working, and we can attach actions to these alerts to send emails, write log entries and so on.
Moreover, we can attach SLA alerts to throttling metrics as well, so we can anticipate problems (OSB can notify us that a service is able to serve fewer requests than before).

The service bus can also analyze a request and route it to different systems according to its parameters. For example, a query service has a date parameter in its request: if the difference between the current date and this date is greater than 6 months, the OSB calls the data warehouse service; if it is less, the service bus uses the transactional system's service.
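
In the message flow this decision is just an XPath/XQuery condition in a routing or branch node; only to make the 6-month rule concrete, here is the same decision expressed in a few lines of Java (the endpoint names are placeholders):

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class QueryRouter {

    // returns the logical name of the business service that should be called
    static String chooseEndpoint(LocalDate requestedDate) {
        long monthsBack = ChronoUnit.MONTHS.between(requestedDate, LocalDate.now());
        return monthsBack > 6 ? "DataWarehouseQueryService" : "TransactionalSystemQueryService";
    }
}
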
And there are a lot of other tricks as well...

And I haven't even talked about how much time and money you can save by using the OSB (considering the cost of starting a new project in a legacy system...). But we are discussing performance here.
You can improve performance by considering:
- the efficiency of the message flows
- the configuration of OSB
- the configuration of Weblogic

The message flow must be as simple as possible. Don't put unnecessary steps into it, as they can slow the service down.
If you have to call administrative services (e.g. logging), use asynchronous calls where possible so they don't hold up the main message flow. An asynchronous call has no response, so the message flow doesn't have to wait for one.
You should also consider how many times a service should be retried in case of an error and how long the message flow should wait between two attempts; set the retry attributes of your business services very carefully.
Another effective trick is to use a split-join to execute multiple tasks in parallel.