Feed aggregator

GDPR

Pete Finnigan - Thu, 2018-06-07 01:26
A couple of days ago I posted my slides from the recent UKOUG Northern Technology day in Leeds, where I spoke about GDPR for the Oracle DBA. I said then that I am also preparing a service line for helping....[Read More]

Posted by Pete On 06/06/18 At 03:10 PM

Categories: Security Blogs

Wireframing or Prototyping: Which One to Use

Nilesh Jethwa - Wed, 2018-06-06 23:08

While clients tend to tell developers to skip wireframing and prototyping, seasoned veterans tell newbies that they can skip wireframing and proceed straight to prototyping. Experienced developers believe that interactive prototyping isn’t useful when presenting a project. For example, if the … Continue reading →

Hat Tip To: MockupTiger Wireframes

ODPI-C 2.4 has been released

Christopher Jones - Wed, 2018-06-06 16:44
ODPI-C logo

Release 2.4 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub.

ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

Top features: Better database notification support. New pool timeout support.

 

I'll keep this brief. See the Release Notes for all changes.

  • Support for Oracle Continuous Query Notification and Advanced Queuing notifications was improved. Notably, replacement subscribe and unsubscribe methods were introduced to make use more flexible. Support for handling AQ notifications was added, so now you can get notified when there is a message to dequeue. And settings for the listening IP address, for notification grouping, and to let you check the registration status are now available.

  • Some additional timeout options for connection pools were exposed.

  • Some build improvements were made: the SONAME is set in the shared library on *ix platforms. There is also a new Makefile 'install' target that installs using a standard *ix footprint.
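
For a quick test drive of the new 'install' target, here is a minimal sketch (the clone URL matches the Code link below; check the installation instructions for the exact variables and install locations your platform supports):

git clone https://github.com/oracle/odpi.git
cd odpi
make                 # builds the shared library (no Oracle client headers needed at build time)
sudo make install    # new target: installs using a standard *ix footprint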

ODPI-C References

Home page: https://oracle.github.io/odpi/

Code: https://github.com/oracle/odpi

Documentation: https://oracle.github.io/odpi/doc/index.html

Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html

Installation Instructions: https://oracle.github.io/odpi/doc/installation.html

Report issues and discuss: https://github.com/oracle/odpi/issues

Facebook, Google and Custom Authentication in the same Oracle APEX 18.1 app

Dimitri Gielis - Wed, 2018-06-06 15:37
Oracle APEX 18.1 has many new features; one of them is called Social Login.

On the World Cup 2018 Challenge, you can see the implementation of this new feature. The site allows you to sign up or log in with Facebook, Google, or your own email address.


Even nicer: if you register with your email but later decide to sign in with Google or Facebook, it will recognize you as the same user as long as the email address is the same.

To get the Social Login to work I had to do the following...

Facebook

To enable Facebook login in your own app, you first have to create an app on Facebook. Creating an application is straightforward if you follow the wizards; just make sure you create a website app.


Google

To enable Google login in your own app, you first have to create a project on Google. Adrian did a really nice blog post which walks you through creating your project and setting up Google authentication in your APEX application.




To hook up Google and Facebook to our own APEX app, we have to let APEX know which credentials it should use, namely the info you find in the previous screenshots.

Web Credentials 

Go to App Builder > Workspace Utilities > All Workspace Utilities and click on the Web Credentials link.

I added the Web Credentials for Facebook and Google. Web Credentials store the necessary info (Client ID = App ID and Client Secret = App Secret) for the OAuth2 authentication. OAuth2 is the standard most sites use these days to authenticate you as a user. Web Credentials are stored at the workspace level, so you can reuse them in all the APEX apps in the same workspace.


Authentication Scheme 

We need to create the different authentication schemes. The Custom Authentication is there to authenticate with email; next to it we have FACEBOOK and GOOGLE (and Application Express Authentication, which is there by default but not used in this app).

Custom Authentication Scheme

I blogged before about Create a Custom Authentication and Authorization Scheme in Oracle APEX. The package I use in that blog post is pretty similar to the one in the World Cup app. In the Authentication Scheme, you define the authentication function. I also have a post-authentication procedure that sets some application items.



Facebook Authentication Scheme

Normally the authentication scheme for Facebook would look a bit different, as Oracle APEX has built-in Facebook authentication, but for that to work you need to load the SSL certificate into the Oracle wallet. On the platform the World Cup app runs on, the database is 12.1, and unfortunately there's a bug in the database with multi-site or wildcard certificates (which Facebook has). So I had to work around the issue, but I still used a new feature of APEX 18.1: instead of Facebook Authentication I used the Generic OAuth2 Provider.

This is how it looks:


As we are using the Generic OAuth2 Provider, we have to define the different OAuth URLs manually. When you look at my URLs they look a bit strange...

To get around the SSL issue I set up a reverse proxy in Apache which handles the SSL, so anytime the database does a call to http://apexrnd.localdomain it goes through the reverse proxy.
The reverse proxy in Apache is configured like this:
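Since the actual configuration is only shown as a screenshot, here is a minimal sketch of what such a reverse proxy could look like (hedged: the host name apexrnd.localdomain comes from this post, but the exact directives and paths are assumptions; mod_proxy, mod_proxy_http and mod_ssl need to be enabled):

<VirtualHost *:80>
    ServerName apexrnd.localdomain
    # Apache does the HTTPS call, so the database itself never has to
    # validate Facebook's multi-site/wildcard certificate
    SSLProxyEngine on
    ProxyPass        /graph.facebook.com/ https://graph.facebook.com/
    ProxyPassReverse /graph.facebook.com/ https://graph.facebook.com/
</VirtualHost>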


Note that in Oracle DB 12.2 and above the SSL bug is not there, so you don't need to do the above. I've used this technique many times before when I didn't want to deal with the SSL certificates and configuring the Oracle wallet. Adrian did a post about APEX Social Sign-In without a wallet, which might be of interest if you are on Oracle XE, for example.

So what else is happening in the authentication scheme? You have to give the scope of what you want to get back from Facebook. In our case, we use the email as username, and as additional attributes we also want the first name, last name and the picture. It's really important you set those additional attributes; otherwise APEX won't pass the full JSON through and takes a shortcut, as it just needs the email.

The User info Endpoint URL is special:
http://apexrnd.localdomain/graph.facebook.com/v2.10/me?fields=#USER_ATTRIBUTES#&access_token=#ACCESS_TOKEN#

Special thanks to Christian of the APEX Dev team, without his help, I wouldn't have figured that one out. Thanks again, Christian!

The next big bit is the post_authenticate procedure which contains the logic to map the Facebook user to the World Cup app user. If it finds the user, it will set some application items again, just like in the custom authentication, but if it doesn't find the user (the first time somebody connects through Facebook), it will create a World Cup user. The most important part of that logic is the part to get the name and picture. Here we parse the JSON the authentication scheme holds in memory.

apex_json.get_varchar2('first_name')
apex_json.get_varchar2('last_name')
apex_json.get_varchar2('picture.data.url')
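
To make that logic a bit more tangible, here is a minimal sketch of such a post-authentication procedure (hedged: the table, column and application item names are hypothetical; only the apex_json calls above are taken from the actual app):

procedure post_authenticate
is
  l_user_id number;
begin
  -- look up the app user by the email the authentication scheme returned
  begin
    select id
      into l_user_id
      from wc_users  -- hypothetical users table
     where email = lower(v('APP_USER'));
  exception
    when no_data_found then
      -- first login through Facebook: create the World Cup user
      insert into wc_users (email, first_name, last_name, picture_url)
      values (lower(v('APP_USER')),
              apex_json.get_varchar2('first_name'),
              apex_json.get_varchar2('last_name'),
              apex_json.get_varchar2('picture.data.url'))
      returning id into l_user_id;
  end;
  -- set application items, just like in the custom authentication
  apex_util.set_session_state('AI_USER_ID', l_user_id);
end post_authenticate;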


And then there's the final bit you have to be careful with: in the authentication scheme, "Switch in Session" must be set to "Enabled". This setting is the magic bit that lets your APEX application have multiple authentication schemes and use one or the other.


Google Authentication Scheme

The Google authentication is simpler than the Facebook one, as we don't have to do the workaround for the certificate because Oracle understands the Google certificate. So here I use the standard APEX 18.1 feature to authenticate against Google. The username attribute is again the email, and the "additional user attribute" is "profile" as that holds the name and picture of the person.


The rest of the authentication scheme is very similar to the one for Facebook. Again, don't forget to set Switch in Session to Enabled.

Login buttons

To call the different authentication schemes on our login page we included different buttons:


The Login button is a normal Submit and will do the Custom Authentication as that is the default authentication (see - Current in Shared Components > Authentication Schemes).

The Facebook button has a Request defined in the link: APEX_AUTHENTICATION=FACEBOOK. This is the way APEX lets you switch authentication schemes on the fly. Very cool! :)


The Google button is similar, but then the request is APEX_AUTHENTICATION=GOOGLE
(note that the name after the equal sign needs to match the name of your authentication scheme).
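
For illustration, such a button link typically uses the standard APEX URL syntax with the request in the fourth position; a hypothetical example (the page alias and substitution strings are assumptions, not taken from the World Cup app):

f?p=&APP_ID.:LOGIN:&APP_SESSION.:APEX_AUTHENTICATION=GOOGLE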


I hope that by showing how the Social Authentication of Oracle APEX 18.1 was implemented in the World Cup 2018 Challenge, I've helped you do the same in your own APEX application.

I really love this new feature of APEX 18.1. The implementation is very elegant, user-friendly and flexible enough to handle most of the OAuth2 authentications out there. Note that Facebook and Google upgrade their user-info APIs, so depending on when you read this, things might have changed. Facebook is typically backward compatible for a long time, but know that the current implementation in APEX is for API v2.10, while the default Facebook authentication is v3.0. As far as I experienced, the user info didn't change between the API versions. I'll do another blog post about how you can debug your authentication, as it might help you get other info than the one I got for the World Cup app. Feel free to add a comment if you have any questions.
Categories: Development

Oracle SOAR?!

Dietrich Schroff - Wed, 2018-06-06 14:07
Yesterday, Larry Ellison announced Oracle SOAR:


Soar [https://en.oxforddictionaries.com/definition/soar]:  
Fly or rise high in the air.
‘the bird spread its wings and soared into the air’
It is about migrating into the cloud [press release]:
the world’s first automated enterprise cloud application upgrade product that will enable Oracle customers to reduce the time and cost of cloud migration by up to 30 percent. By providing a complete set of automated tools and proven cloud transition methodologies, the new “Soar to the Cloud” solution enables customers with applications running on premises to upgrade to Oracle Cloud Applications in as little as 20 weeks.
Oracle does not see a bird - Oracle SOAR is more like a rocket ;-)

But it is not for plain databases or application servers. It is only for E-Business Suite, PeopleSoft and Hyperion:
The Oracle Soar offering is available today for Oracle E-Business Suite, Oracle PeopleSoft and Oracle Hyperion Planning customers who are moving to Oracle ERP Cloud, Oracle SCM Cloud and Oracle EPM Cloud. Oracle will continue to invest in the development of the product, extending the solution to Oracle PeopleSoft and Oracle E-Business Suite customers moving to Oracle HCM Cloud, and Oracle Siebel customers moving to Oracle CX Cloud in the future.

How to Remove Japanese SEO Spam from your Website?

iMERGE Group - Wed, 2018-06-06 11:51
Discovering the Hack

1. Identify infected pages using Google Search
You can uncover such pages by opening Google Search and searching for:
site:[your site root URL] japan
Navigate through some pages of the search results to see if you discover any suspicious looking URLs. These are the pages indexed by Google containing the word ‘japan’. If you notice pages with Japanese characters in the title or description, it is likely that your website is infected.
Japanese SEO Spam in Google Search Results

2. Verify with Security Issues Tool in Google Search Console
In your Google Search Console (earlier called Google Webmaster Tools), navigate to the Security Issues Tool in the left sidebar.
Google Search Console Security Issues Tool
3. Fetch as Google to check for ‘Cloaking’
When you visit any of these hacked pages, you might see a 404 not found page suggesting that the web page doesn’t exist. Be careful, the hacker may be using a technique called cloaking. Check for cloaking by using the “Fetch as Google” tool in your Google Search Console.
Fixing the Japanese SEO Spam Hack

1. Remove any newly created user accounts in the Search Console
If you don’t recognize any users in the “Users and Property Owners” tab, immediately revoke their access. Websites hacked with the Japanese SEO Spam add spammy Gmail accounts as admins so that the hackers can change your site’s settings like sitemaps and geotargeting.
2. Run a Malware Scan
Scan your web server for malware and malicious files using the ‘Virus Scanner’ tool in the cPanel provided by your web host.
3. Check your .htaccess file
Hackers often use the .htaccess file to redirect users and search engines to different malicious pages. Verify the contents of the .htaccess file against a last known clean version from your backups. If you find any suspicious code, comment it out by putting the ‘#’ character in front of the rule.
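For illustration, a hijacked .htaccess often contains a conditional redirect similar to this hypothetical rule (shown here already commented out, as described above; the target domain is made up):

# RewriteEngine On
# RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot) [NC]
# RewriteRule ^(.*)$ http://spam-example.test/$1 [R=301,L]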
4. Check Recently Modified Files
Login to your web server via SSH and execute the following command to find the most recently modified files:
find /path-of-www -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort -r
If you are an Astra customer, you would have received an email telling you about malicious file changes.
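If you prefer to narrow the same check down to files changed during, say, the last week, a simple variant (assuming GNU find) is:
find /path-of-www -type f -mtime -7 -ls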
5. Check your Sitemap
A hacker may have modified your sitemap, or added a new one, to get the Japanese SEO Spam pages indexed quickly. If you notice any suspicious links in the sitemap, ensure that you quickly update your CMS core files from a last known clean backup.
6. Prevent future attacks with a Website Firewall
Another option to prevent Japanese SEO Spam infections is to use a Website Firewall.


If you are still looking for an expert then feel free to reach out to us at support@ingressit.com and someone from our team will surely help you. 


Categories: DBA Blogs

Wholesale Distributors Build Foundation for Growth

Oracle Press Releases - Wed, 2018-06-06 08:00
Press Release
Wholesale Distributors Build Foundation for Growth
Thousands of wholesale distributors including Kitchen Art and Circle Valve Technologies improve business operations with NetSuite

San Mateo, Calif.—Jun 6, 2018

To successfully adapt to a rapidly changing business environment, thousands of wholesale distributors have selected NetSuite to improve business performance, increase customer satisfaction and stay ahead of the competition. For example, Kitchen Art, one of the largest distributors of kitchen cabinets in the southeastern United States, and Circle Valve, a distributor of industrial valves, fittings, filters, and measurement control devices, selected SuiteSuccess for Wholesale Distribution and implemented NetSuite to streamline the management of financials, inventory and reporting.

Kitchen Art Improves its Business Systems with NetSuite

Started in 1989, Kitchen Art provides design and installation services out of its three offices in Florida through more than 900 job sites a month. To successfully grow its business and manage increasing complexity, Kitchen Art selected NetSuite SuiteSuccess for Wholesale Distribution in September 2016. With SuiteSuccess, Kitchen Art is able to quickly and easily track orders, inventory, finances and operations, while also customizing business-specific workflows across processes like product selection and order entry. As a result, the Kitchen Art management team has been able to improve decision making and drive efficiencies by gaining unprecedented visibility into business operations.

“As we continue to achieve great success in our industry, NetSuite is helping us expand even further in Florida and the southeast US,” said Rick Cuseo, VP of Finance, Kitchen Art. “NetSuite, along with our outstanding employees and valued customers, are the secret sauce that help to fuel our growth.”

Circle Valve Prepares for Next Stage of Growth with NetSuite

Founded by two friends in 1986, Circle Valve Technologies has grown into a world class supplier of precision valves, fittings and controls. Today, the company operates an 8,000-square-foot facility that houses roughly $1.2 million in inventory. To continue to expand its business, Circle Valve Technologies needed a strong business management system. After a careful evaluation, Circle Valve Technologies selected NetSuite SuiteSuccess for Wholesale Distribution in August 2016 for its comprehensive functionality, cloud-based architecture and ease of implementation.

“Circle Valve Technologies has been built on strong technical expertise, inventory management and customer service,” said Chris Simmons, General Manager, Circle Valve Technologies. “To support our next stage of growth, we needed a new business management system and NetSuite has been phenomenal. It’s everything we need all in one system. The amount of detail I can get from a customer record just by clicking the dashboard view is staggering.”

For more information on SuiteSuccess for Wholesale Distribution, visit: www.netsuite.com/suitesuccesswd.

Contact Info
Michael Robinson
Oracle NetSuite Corporate Communications
781-974-9401
michael.s.robinson@oracle.com
About Kitchen Art

Since 1989, Kitchen Art has set the standard for high-end custom kitchen cabinetry design and installation, and quality remodeling. To learn more visit http://kitchenartdesigncenter.com.

About Circle Valve Technologies

Circle Valve is a value-added distributor and manufacturer’s rep of industrial valves, fittings, filters and measurement control devices for a variety of industries and applications. To learn more visit https://www.circlevalve.com.

About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials / Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 40,000 organizations and subsidiaries in 199 countries and territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Michael Robinson

  • 781-974-9401

interpret trace file

Tom Kyte - Wed, 2018-06-06 07:26
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Windows NT Version V6.2 CPU : 12 - type 8664, 6 Physical Cores Process Af...
Categories: DBA Blogs

SQL statements inside AWR views

Tom Kyte - Wed, 2018-06-06 07:26
Hi Tom, I'm trying to find the history of executions for a given sql statement by using AWR views like DBA_HIST_ACTIVE_SESS_HISTORY, DBA_HIST_SQLSTAT, DBA_HIST_SQLTEXT, DBA_HIST_SQL_PLAN etc. I find the SQL_IDs for SQL by querying DBA_HIST_SQLT...
Categories: DBA Blogs

Display master child data as a set - from 2 different tables

Tom Kyte - Wed, 2018-06-06 07:26
Hi , I will be glad if you could help me in this. I have a Parent Table ( ORDER_HEADER ) and a Child table ( ORDER_LINE ). They are linked by order_id. ORDER_HEADER holds order details for customers and ORDER_LINE holds the child lines for ea...
Categories: DBA Blogs

Parsing a CLOB field with CSV data and put the contents into it proper fields

Tom Kyte - Wed, 2018-06-06 07:26
My Question is a variation of the one originally posted on 11/9/2015. parsing a CLOB field which contains CSV data. My delimited data that is loaded into a clob field contains output in the form attribute=data~ The clob field can contain up to 6...
Categories: DBA Blogs

process limit and sessions

Tom Kyte - Wed, 2018-06-06 07:26
Hi there, For the past few days,during the peak hours i am getting the following error in a frequent way " Listener refused the connection with the following error: ORA-12519, TNS:no appropriate service handler found " I googled it first,...
Categories: DBA Blogs

Which is default if session is interrupted - COMMIT or ROLLBACK?

Tom Kyte - Wed, 2018-06-06 07:26
Hi Tom, I use Oracle SQL Developer. I do not use autocommit and in normal situation I always use after some block of SQL commands (INSERT, DELETE etc.) COMMIT, or ROLLBACK. But what does happen if I do not finish this block of SQL command in...
Categories: DBA Blogs

Dealing with upgrade scenarios for SQL Server on Docker and Swarm

Yann Neuhaus - Wed, 2018-06-06 03:49

This blog post comes from an interesting experience with one customer about upgrading SQL Server in a Docker environment. Let’s quickly set the context: a production environment that includes a standalone Docker engine on top of an openSUSE Linux distribution with some SQL Server 2017 Linux containers. The deal was to update those SQL Server instances from 2017 CU1 to 2017 CU7.

blog 137 - 0 - banner update sql docker

The point is that we had applied the same kind of upgrade earlier that morning, but it concerned virtual environments with SQL Server 2017 on Windows Server 2016. As you already guessed, we cannot go the same way with SQL Server containers. The good news is that the procedure is fully documented by Microsoft, but let’s focus on my customer’s question: can we achieve rolling upgrades with SQL Server containers? A rolling upgrade in this context may be defined as keeping the existing system running while we upgrade each component. Referring to this customer’s context, the quick answer is no, because these are standalone instances. We may hope to get as close as possible to the existing rolling upgrade scenarios with SQL Server HA capabilities, but those are currently pretty limited on Docker and didn’t make sense in this specific context.

In addition, let’s say that my customer spins up SQL Server containers by running the docker run command. In this case, we had no choice but to re-create the concerned containers with the new image. So basically, according to the Microsoft documentation, the game will consist of the following main steps:

  • To download the latest SQL Server image from the Microsoft Docker hub.
  • To ensure we are using persistent volumes with the SQL Server containers.
  • To initiate user database backups, to keep them safe (see the backup sketch below).
  • To remove the concerned container.
  • To re-create the container with the same definition but the upgraded base image.

The aforementioned steps will lead to some SQL Server instance unavailability.
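
For the backup step mentioned above, here is a hedged sketch using the demo environment below (the container name, sa password and target path are taken from this lab and are assumptions for your own setup; /opt/mssql-tools/bin/sqlcmd is where the Microsoft image ships the client):

docker exec -it prod_db_1 /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P 'Password1' \
  -Q "BACKUP DATABASE [dbi_tools] TO DISK = N'/u98/dbi_tools.bak'"

Because /u98 is mapped to a persistent volume, the backup survives the removal of the container.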

Let’s simulate this scenario in my lab environment with a custom image (but the principle remains the same as for my customer).

[clustadmin@docker1 PROD]$ docker ps
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS                    PORTS                    NAMES
aa6b4411e4e4        dbi/dbi_linux_sql2017:CU4   "./entrypoint.sh 'ta…"   22 minutes ago      Up 22 minutes (healthy)   0.0.0.0:1433->1433/tcp   prod_db_1
153b1cc0bbe0        registry:2                  "/entrypoint.sh /etc…"   6 weeks ago         Up About an hour          5000/tcp                 registry.1.pevhlfmsailjx889ktats1fnh

 

The first docker container concerns my SQL Server instance with a custom base image, dbi/dbi_linux_sql2017:CU4. My environment also includes one user database, dbi_tools, and some related external objects such as SQL Server jobs and additional logins – the dbi and test logins. A simplified representation of my customer’s scenario …

blog 137 - 0 - mssql-cli sql version

blog 137 - 2 - mssql-cli sql login

blog 137 - 1 - mssql-cli user db

So, the challenge here is to upgrade the current container running SQL Server 2017 CU4 to the latest SQL Server 2017 CU7. The first step will consist in upgrading the dbi/dbi_linux_sql2017:CU4 image. Note I will use docker-compose in the next part of my demo, but the same goal can be achieved differently. So, let’s modify the FROM line inside the Dockerfile as follows:

# set the base image (2017 version)
# > CU4 includes SQL Server agent now
FROM microsoft/mssql-server-linux:2017-CU7

 

Then let’s run a docker-compose command with the following docker-compose input file in order to generate a new fresh SQL Server image (CU7). The interesting part of my docker-compose file:

version: '3.1'
services:
  db: 
    build: .
    image: dbi/dbi_linux_sql2017:CU7
…
[clustadmin@docker1 PROD]$ docker-compose build

 

Let’s take a look at the existing docker images:

[clustadmin@docker1 PROD]$ docker images
REPOSITORY                                   TAG                    IMAGE ID            CREATED             SIZE
dbi/dbi_linux_sql2017                        CU7                    0b4d23626dae        44 minutes ago      1.43GB
dbi/dbi_linux_sql2017                        CU4                    0db4694645ec        About an hour ago   1.42GB
…

 

My new image has been created successfully. We may also notice 2 images now: the current one (with the CU4 tag) and the new one (with the CU7 tag).

Obviously persistent volumes are an important part of the customer architecture, so I also simulated some volume mappings inside my docker-compose file.

version: '3.1'
services:
  db:
    build: .
    image: dbi/dbi_linux_sql2017:CU7
    #container_name: dbi_linux_sql2017_cu4
    ports:
      - "1433:1433"
    volumes:
      - /u00/db2:/u00
      - /u01/db2:/u01
      - /u02/db2:/u02
      - /u03/db2:/u03
      - /u98/db2:/u98
    environment:
      - MSSQL_SA_PASSWORD=Password1
      - ACCEPT_EULA=Y
      - MSSQL_PID=Developer
      - MSSQL_USER=dbi
      - MSSQL_USER_PASSWORD=Password1
      - TZ=Europe/Berlin

 

Let’s move forward to the next step that consists in removing the current SQL Server 2017 CU4 container (prod_db_1):

[clustadmin@docker1 PROD]$ docker stop prod_db_1 && docker rm prod_db_1
prod_db_1
prod_db_1

 

And finally let’s spin up a new container based on the new image (SQL Server 2017 CU7)

[clustadmin@docker1 PROD]$ docker-compose up -d

 

Just out of curiosity, a quick look at the docker logs output reveals some records related to the upgrade process:

2018-06-04 22:45:43.79 spid7s      Configuration option 'allow updates' changed from 1 to 0. Run the RECONFIGURE statement to install.
2018-06-04 22:45:43.80 spid7s
2018-06-04 22:45:43.80 spid7s      -----------------------------------------
2018-06-04 22:45:43.80 spid7s      Execution of PRE_SQLAGENT100.SQL complete
2018-06-04 22:45:43.80 spid7s      -----------------------------------------
2018-06-04 22:45:43.81 spid7s      DMF pre-upgrade steps...
2018-06-04 22:45:44.09 spid7s      DC pre-upgrade steps...
2018-06-04 22:45:44.09 spid7s      Check if Data collector config table exists...
…
2018-06-04 22:45:59.39 spid7s      ------------------------------------
2018-06-04 22:45:59.39 spid7s      Execution of InstDac.SQL complete
2018-06-04 22:45:59.39 spid7s      ------------------------------------
2018-06-04 22:45:59.40 spid7s      -----------------------------------------
2018-06-04 22:45:59.40 spid7s      Starting execution of EXTENSIBILITY.SQL
2018-06-04 22:45:59.40 spid7s      -----------------------------------------
2018-06-04 22:45:59.40 spid7s      -----------------------------------------
2018-06-04 22:45:59.40 spid7s      Finished execution of EXTENSIBILITY.SQL
2018-06-04 22:45:59.41 spid7s      -----------------------------------------
2018-06-04 22:45:59.44 spid7s      Configuration option 'show advanced options' changed from 1 to 1. Run the RECONFIGURE statement to install.
2018-06-04 22:45:59.44 spid7s      Configuration option 'show advanced options' changed from 1 to 1. Run the RECONFIGURE statement to install.
2018-06-04 22:45:59.45 spid7s      Configuration option 'Agent XPs' changed from 1 to 1. Run the RECONFIGURE statement to install.
2018-06-04 22:45:59.45 spid7s      Configuration option 'Agent XPs' changed from 1 to 1. Run the RECONFIGURE statement to install.
2018-06-04 22:45:59.53 spid7s      Dropping view [dbo].[sysutility_ucp_configuration]
2018-06-04 22:45:59.53 spid7s      Creating view [dbo].[sysutility_ucp_configuration]...
2018-06-04 22:45:59.54 spid7s      Dropping view [dbo].[sysutility_ucp_policy_configuration]
2018-06-04 22:45:59.54 spid7s      Creating view [dbo].[sysutility_ucp_policy_configuration]...
2018-06-04 22:45:59.55 spid7s      Dropping [dbo].[fn_sysutility_get_is_instance_ucp]
….

 

The container has restarted correctly with the new base image, as shown below:

[clustadmin@docker1 PROD]$ docker ps
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS                             PORTS                    NAMES
a17800f70fff        dbi/dbi_linux_sql2017:CU7   "./entrypoint.sh 'ta…"   4 minutes ago       Up 33 seconds (health: starting)   0.0.0.0:1433->1433/tcp   prod_db_1
153b1cc0bbe0        registry:2                  "/entrypoint.sh /etc…"   6 weeks ago         Up 2 hours                         5000/tcp                 registry.1.pevhlfmsailjx889ktats1fnh

 

Let’s check the new SQL Server version and if all my objects are still present:

blog 137 - 3 - mssql-cli sql version

The upgrade seems to have completed successfully, and all objects that existed before the upgrade operation are still there:

blog 137 - 4 - mssql-cli sql objects

 

Great job! But let’s go beyond this procedure with the following question: could we have done better here? From a process perspective, the response is probably yes, but we have to rely on more sophisticated features provided by Swarm mode (or other orchestrators such as K8s), like service deployments, that make the upgrade procedure drastically easier. But don’t get me wrong here. Even in Swarm mode or other orchestrators, we are still not able to guarantee zero downtime, but we may perform the upgrade faster and get very close to that target.

Previously I used docker-compose to spin up my SQL Server container. Now let’s use its counterpart on a Docker Swarm environment.

[clustadmin@docker1 PROD]$ docker info | grep -i swarm
Swarm: active

[clustadmin@docker1 PROD]$ docker node ls
ID                            HOSTNAME                    STATUS              AVAILABILITY        MANAGER STATUS
hzwjpb9rtstwfex3zsbdnn5yo *   docker1.dbi-services.test   Ready               Active              Leader
q09k7pqe940qvv4c1jprzk2yv     docker2.dbi-services.test   Ready               Active
c9burq3qn4iwwbk28wrpikqra     docker3.dbi-services.test   Ready               Active

 

I already prepared a stack deployment that includes a task related to my SQL Server instance (2017 CU4):

[clustadmin@docker1 PROD]$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                       PORTS
2bmomzq0inu8        dbi_db              replicated          1/1                 dbi/dbi_linux_sql2017:CU4   *:1433->1433/tcp
[clustadmin@docker1 PROD]$ docker service ps dbi_db
ID                  NAME                IMAGE                       NODE                        DESIRED STATE       CURRENT STATE           ERROR               PORTS
rbtbkcz0cy8o        dbi_db.1            dbi/dbi_linux_sql2017:CU4   docker1.dbi-services.test   Running

 

A quick connection to the concerned SQL Server instance confirms we run on SQL Server 2017 CU4:

blog 137 - 5- mssql-cli sql version swarm

Now go ahead and let’s perform the same upgrade we’ve done previously (2017 CU7). In this case the game will consist in updating the corresponding docker-compose file with the new image as follows (I show only the interesting part of my docker-compose file):

version: '3.1'
services:
  db: 
    build: .
    image: dbi/dbi_linux_sql2017:CU7
…

 

… and then I just have to give the new definition of my docker-compose file as input to my stack deployment as follows:

[clustadmin@docker1 PROD]$ docker stack deploy -c docker-compose.yml dbi
Ignoring unsupported options: build

Updating service dbi_db (id: 2bmomzq0inu8q0mwkfff8apm7)
…
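
As a side note, for a single service the same re-deployment can also be triggered without re-deploying the whole stack, for example (a sketch against the service created above):

[clustadmin@docker1 PROD]$ docker service update --image dbi/dbi_linux_sql2017:CU7 dbi_db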

 

The system will then perform all the steps we previously performed manually in the first test, including stopping the old task (container), scheduling the old task’s update with the new image and finally starting the updated container, as shown below:

[clustadmin@docker1 PROD]$ docker service ps dbi_db
ID                  NAME                IMAGE                                  NODE                        DESIRED STATE       CURRENT STATE             ERROR               PORTS
4zey68lh1gin        dbi_db.1            127.0.0.1:5000/dbi_linux_sql2017:CU7   docker1.dbi-services.test   Running             Starting 40 seconds ago
rbtbkcz0cy8o         \_ dbi_db.1        dbi/dbi_linux_sql2017:CU4              docker1.dbi-services.test   Shutdown            Shutdown 40 seconds ago

 

A quick check of my new SQL Server version:

blog 137 - 51- mssql-cli sql version swarm

That’s it!

In this blog post, I hope I managed to get you interested in using Swarm mode in such cases. Next time I will talk about SQL Server upgrade scenarios on K8s, which is a little bit different.

Stay tuned!


The post Dealing with upgrade scenarios for SQL Server on Docker and Swarm appeared first on Blog dbi services.

DBaaS Monitor in Oracle Database Cloud (DBCS) is now replaced with SQL Developer Web (SDW)

Online Apps DBA - Wed, 2018-06-06 03:29

DBaaS Monitor in Oracle Database Cloud (DBCS) is now replaced with SQL Developer Web. Confused why so? [BLOG] Get answers to all your queries regarding the new updates for Oracle Database Cloud Service (DBCS) users and add points to your knowledge: click http://k21academy.com/clouddba28 and learn […]

The post DBaaS Monitor in Oracle Database Cloud (DBCS) is now replaced with SQL Developer Web (SDW) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How to compile PostgreSQL 11 with support for JIT compilation on RHEL/CentOS 7

Yann Neuhaus - Wed, 2018-06-06 01:30

As you might already know, PostgreSQL 11 will bring support for just-in-time compilation. When you want to compile PostgreSQL 11 with JIT support on RedHat/CentOS 7, this requires a little hack (more on the reason below). In this post we’ll look at how you can do it, at least for testing. For production it is of course not recommended, as hacking the make file is nothing you want to do; at least I would not do it. Let’s go.

As mentioned I am on CentOS 7:

postgres@pgbox:$ cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core) 

What you need to get support for JIT is LLVM. When you check the CentOS repository, llvm is there:

postgres@pgbox:$ yum search llvm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.spreitzer.ch
 * extras: mirror.spreitzer.ch
 * updates: mirror.spreitzer.ch
===================================================== N/S matched: llvm =====================================================
llvm-devel.x86_64 : Libraries and header files for LLVM
llvm-doc.noarch : Documentation for LLVM
llvm-libs.x86_64 : LLVM shared libraries
llvm-ocaml.x86_64 : OCaml binding for LLVM
llvm-ocaml-devel.x86_64 : Development files for llvm-ocaml
llvm-ocaml-doc.noarch : Documentation for LLVM's OCaml binding
llvm-private.i686 : llvm engine for Mesa
llvm-private.x86_64 : llvm engine for Mesa
llvm-private-devel.i686 : Libraries and header files for LLVM
llvm-private-devel.x86_64 : Libraries and header files for LLVM
llvm-static.x86_64 : LLVM static libraries
mesa-private-llvm.i686 : llvm engine for Mesa
mesa-private-llvm.x86_64 : llvm engine for Mesa
mesa-private-llvm-devel.i686 : Libraries and header files for LLVM
mesa-private-llvm-devel.x86_64 : Libraries and header files for LLVM
clang.x86_64 : A C language family front-end for LLVM
llvm.x86_64 : The Low Level Virtual Machine

  Name and summary matches only, use "search all" for everything.

The issue is that the PostgreSQL documentation clearly states that llvm needs to be at least version 3.9, and in the CentOS repository you’ll find this:

postgres@pgbox:$ yum info llvm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror1.hs-esslingen.de
 * extras: mirror.fra10.de.leaseweb.net
 * updates: mirror.netcologne.de
Available Packages
Name        : llvm
Arch        : x86_64
Version     : 3.4.2
Release     : 8.el7
Size        : 1.3 M
Repo        : extras/7/x86_64
Summary     : The Low Level Virtual Machine
URL         : http://llvm.org/
License     : NCSA
Description : LLVM is a compiler infrastructure designed for compile-time,
            : link-time, runtime, and idle-time optimization of programs from
            : arbitrary programming languages.  The compiler infrastructure includes
            : mirror sets of programming tools as well as libraries with equivalent
            : functionality.

What to do? What you need to do is add the epel repository, where you can find llvm 3.9 and 5.0:

postgres@pgbox:$ wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
postgres@pgbox:$ sudo yum localinstall epel-release-latest-7.noarch.rpm
postgres@pgbox:$ sudo yum install llvm5.0 llvm5.0-devel clang

Having that we should be ready for configuration:

postgres@pgbox:$ PGHOME=/u01/app/postgres/product/11/db_0
postgres@pgbox:$ SEGSIZE=1
postgres@pgbox:$ BLOCKSIZE=8
postgres@pgbox:$ WALSEGSIZE=16
postgres@pgbox:$ ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=${WALSEGSIZE}  \
            --with-llvm LLVM_CONFIG='/usr/lib64/llvm3.9/bin/llvm-config' \
            --with-systemd 

That succeeds, so let’s compile:

postgres@pgbox:$ make -j 4 all

… and you will run into this issue:

/usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -O2  -I../../src/include  -D_GNU_SOURCE  -flto=thin -emit-llvm -c -o localtime.bc localtime.c
clang: error: unknown argument: '-flto=thin'
make[2]: *** [localtime.bc] Error 1
make[2]: Leaving directory `/home/postgres/postgresql/src/timezone'
make[1]: *** [all-timezone-recurse] Error 2
make[1]: Leaving directory `/home/postgres/postgresql/src'
make: *** [all-src-recurse] Error 2

There is a mail thread on the hackers mailing list which describes the issue. Apparently the clang compiler is too old to understand this argument. What you can do is adjust the corresponding line in the Makefile:

postgres@pgbox:$ vi src/Makefile.global.in
# original line:
COMPILE.c.bc = $(CLANG) -Wno-ignored-attributes $(BITCODE_CFLAGS) $(CPPFLAGS) -flto=thin -emit-llvm -c
# changed to (the -flto=thin flag removed):
COMPILE.c.bc = $(CLANG) -Wno-ignored-attributes $(BITCODE_CFLAGS) $(CPPFLAGS) -emit-llvm -c

Doing all the stuff again afterwards:

postgres@pgbox:$ make clean
postgres@pgbox:$ ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=${WALSEGSIZE}  \
            --with-llvm LLVM_CONFIG='/usr/lib64/llvm3.9/bin/llvm-config' \
            --with-systemd 
postgres@pgbox:$ make -j 4 all

… led to the following issue (at least for me):

make[2]: g++: Command not found
make[2]: *** [llvmjit_error.o] Error 127
make[2]: *** Waiting for unfinished jobs....
make[2]: Leaving directory `/home/postgres/postgresql/src/backend/jit/llvm'
make[1]: *** [all-backend/jit/llvm-recurse] Error 2
make[1]: Leaving directory `/home/postgres/postgresql/src'
make: *** [all-src-recurse] Error 2

This should be easy to fix:

postgres@pgbox:$ sudo yum install -y gcc-c++
postgres@pgbox:$ which g++
/bin/g++

Again:

postgres@pgbox:$ make -j 4 install
postgres@pgbox:$ cd contrib
postgres@pgbox:$ make -j 4 install

… and this time it succeeds (note that I did not run the regression tests, so maybe something will still go wrong there).

JIT is enabled by default and controlled by these parameters:

postgres=# select name,setting from pg_settings where name like 'jit%';
          name           | setting 
-------------------------+---------
 jit                     | on
 jit_above_cost          | 100000
 jit_debugging_support   | off
 jit_dump_bitcode        | off
 jit_expressions         | on
 jit_inline_above_cost   | 500000
 jit_optimize_above_cost | 500000
 jit_profiling_support   | off
 jit_provider            | llvmjit
 jit_tuple_deforming     | on
(10 rows)

To test that it really kicks in you can do something like this:

postgres=# create table ttt (a int, b text, c date);
postgres=# insert into ttt (a,b,c)
           select aa.*, md5(aa::text), now()
             from generate_series(1,1000000) aa;
postgres=# set jit_above_cost=5;
postgres=# set jit_optimize_above_cost=5;
postgres=# set jit_inline_above_cost=5;
postgres=# explain select sum(a) from ttt;

… which should lead to a plan like this:

                                      QUERY PLAN                                       
---------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=15554.55..15554.56 rows=1 width=8)
   ->  Gather  (cost=15554.33..15554.54 rows=2 width=8)
         Workers Planned: 2
         ->  Partial Aggregate  (cost=14554.33..14554.34 rows=1 width=8)
               ->  Parallel Seq Scan on ttt  (cost=0.00..13512.67 rows=416667 width=4)
 JIT:
   Functions: 8
   Inlining: true
   Optimization: true
(9 rows)
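
If you also want to see how much time is spent on JIT compilation, inlining and optimization, run the statement with explain (analyze) instead; the JIT section of the plan then also shows timing details (output not shown here):

postgres=# explain (analyze) select sum(a) from ttt;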

Hope that helps.

 

The post How to compile PostgreSQL 11 with support for JIT compilation on RHEL/CentOS 7 appeared first on Blog dbi services.

Composite Range-Hash interval partitioning with LOB

Tom Kyte - Tue, 2018-06-05 13:06
Hi, I would like to partition a table with a LOB column using Range-Hash interval partitioning scheme. But I not sure how the exact partition gets distributed in this scenario and also I noticed following differences based on how I specify my p...
Categories: DBA Blogs

Larry Ellison Debuts Automated Oracle Offering that Dramatically Cuts Cloud Upgrade Costs

Oracle Press Releases - Tue, 2018-06-05 13:00
Press Release
Larry Ellison Debuts Automated Oracle Offering that Dramatically Cuts Cloud Upgrade Costs Oracle’s automated cloud migration solution enables customers to reduce upgrade costs, accelerate time to value and improve delivery predictability

Redwood Shores, Calif.—Jun 5, 2018

Oracle (NYSE: ORCL) CTO and Chairman, Larry Ellison, today unveiled the world’s first automated enterprise cloud application upgrade product that will enable Oracle customers to reduce the time and cost of cloud migration by up to 30 percent. By providing a complete set of automated tools and proven cloud transition methodologies, the new “Soar to the Cloud” solution enables customers with applications running on premises to upgrade to Oracle Cloud Applications in as little as 20 weeks.

“It’s now easier to move from Oracle E-Business Suite to Oracle Fusion ERP in the cloud, than it is to upgrade from one version of E-Business Suite to another,” said Ellison. “A lot of tedious transitions that people once did manually are now automated. If you choose Oracle Soar, it will be the last upgrade you’ll ever do.”

Oracle Soar includes a discovery assessment, process analyzer, automated data and configuration migration utilities, and rapid integration tools. The automated process is powered by the True Cloud Method, Oracle’s proprietary approach to support customers throughout the journey to the cloud. It is guided by a dedicated Oracle concierge service to help ensure a rapid and predictable upgrade that aligns with modern, industry best practices. Customers can keep the upgrade on-track by monitoring the status of their cloud transition via an intuitive mobile application, which features a step-by-step implementation guide indicating exactly what needs to be done each day.

“We know the power of automation in solving business problems for our customers—it’s baked into all of our applications,” said Beth Boettcher, senior vice president, North American applications consulting, Oracle. “We’ve applied the same thinking to the cloud upgrade process to create an end-to-end solution that will enable our customers to experience a rapid, predictable, and cost-effective journey to the cloud.”

Oracle Soar to the Cloud

Leading organizations are choosing Oracle Cloud:

 

The Oracle Soar offering is available today for Oracle E-Business Suite, Oracle PeopleSoft and Oracle Hyperion Planning customers who are moving to Oracle ERP Cloud, Oracle SCM Cloud and Oracle EPM Cloud. Oracle will continue to invest in the development of the product, extending the solution to Oracle PeopleSoft and Oracle E-Business Suite customers moving to Oracle HCM Cloud, and Oracle Siebel customers moving to Oracle CX Cloud in the future.

Additional Information

For additional information on Oracle Soar, visit oracle.com/soar

Contact Info
Bill Rundle
Oracle
650.506.1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The above is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Talk to a Press Contact

Bill Rundle

  • 650.506.1891

Oracle Service Bus 12.2.1.1.0: Service Exploring via WebLogic Server MBeans with JMX

Amis Blog - Tue, 2018-06-05 13:00

In a previous article I talked about an OSBServiceExplorer tool to explore the services (proxy and business) within the OSB via WebLogic Server MBeans with JMX. The code mentioned in that article was based on Oracle Service Bus 11.1.1.7 (11g).

In the meantime the OSB world has changed (for example now we can use pipelines) and it was time for me to pick up the old code and get it working within Oracle Service Bus 12.2.1.1.0 (12c).

This article will explain how the OSBServiceExplorer tool uses WebLogic Server MBeans with JMX in a 12c environment.

Unfortunately, getting the Java code to work in 12c wasn’t as straightforward as I had hoped.

For more details on the OSB, WebLogic Server MBeans and JMX subject, I kindly refer you to my previous article. In this article I will refer to it as my previous MBeans 11g article.
[https://technology.amis.nl/2017/03/09/oracle-service-bus-service-exploring-via-weblogic-server-mbeans-with-jmx/]

Before using the OSBServiceExplorer tool in a 12c environment, I first created two OSB Projects (MusicService and TrackService) with pipelines, proxy and business services. I used Oracle JDeveloper 12c (12.2.1.1.0) for this (from within a VirtualBox appliance).

For the latest version of Oracle Service Bus see:
http://www.oracle.com/technetwork/middleware/service-bus/downloads/index.html

If you want to use a VirtualBox appliance, have a look at for example: Pre-built Virtual Machine for SOA Suite 12.2.1.3.0
[http://www.oracle.com/technetwork/middleware/soasuite/learnmore/vmsoa122130-4122735.html]

After deploying the OSB Projects that were created in JDeveloper, to the WebLogic server, the Oracle Service Bus Console 12c (in my case: http://localhost:7101/servicebus) looks like:

Before we dive into the OSBServiceExplorer tool, I’ll first give you some detailed information about the “TrackService” (from JDeveloper) that will be used as an example in this article.

The “TrackService” sboverview looks like:

As you can see, several proxy services, a pipeline and a business service are present.

The Message Flow of pipeline “TrackServicePipeline” looks like:

The OSB Project structure of service “TrackService” looks like:

Runtime information (name and state) of the server instances

The OSBServiceExplorer tool writes its output to a text file called “OSBServiceExplorer.txt”.

First the runtime information (name and state) of the server instances (Administration Server and Managed Servers) of the WebLogic domain are written to file.

Example content fragment of the text file:

Found server runtimes:
– Server name: DefaultServer. Server state: RUNNING

For more info and the responsible code fragment see my previous MBeans 11g article.

List of Ref objects (projects, folders, or resources)

Next, a list of Ref objects is written to file, including the total number of objects in the list.

Example content fragment of the text file:

Found total of 45 refs, including the following pipelines, proxy and business services:
– ProxyService: TrackService/proxy/TrackServiceRest
– BusinessService: MusicService/business/db_InsertCD
– BusinessService: TrackService/business/CDService
– Pipeline: TrackService/pipeline/TrackServicePipeline
– ProxyService: TrackService/proxy/TrackService
– Pipeline: MusicService/pipeline/MusicServicePipeline
– ProxyService: MusicService/proxy/MusicService
– ProxyService: TrackService/proxy/TrackServiceRestJSON

See the code fragment below (I highlighted the changes I made to the code from the 11g version):

Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);

fileWriter.write("Found total of " + refs.size() +
                 " refs, including the following pipelines, proxy and business services:\n");

for (Ref ref : refs) {
    String typeId = ref.getTypeId();

    if (typeId.equalsIgnoreCase("ProxyService")) {
        fileWriter.write("- ProxyService: " + ref.getFullName() +
                         "\n");
    } else if (typeId.equalsIgnoreCase("Pipeline")) {
        fileWriter.write("- Pipeline: " +
                         ref.getFullName() + "\n");                    
    } else if (typeId.equalsIgnoreCase("BusinessService")) {
        fileWriter.write("- BusinessService: " +
                         ref.getFullName() + "\n");
    } else {
        //fileWriter.write(ref.getFullName());
    }
}

fileWriter.write("" + "\n");

For more info see my previous MBeans 11g article.

ResourceConfigurationMBean

In the Oracle Enterprise Manager FMC 12c (in my case: http://localhost:7101/em) I navigated to SOA / service-bus and opened the System MBean Browser:

Here the ResourceConfigurationMBean’s can be found under com.oracle.osb.


[Via MBean Browser]

If we navigate to a particular ResourceConfigurationMBean for a proxy service (for example …$proxy$TrackService), the information on the right is as follows:


[Via MBean Browser]

As in the 11g version the attributes Configuration, Metadata and Name are available.

If we navigate to a particular ResourceConfigurationMBean for a pipeline (for example …$pipeline$TrackServicePipeline), the information on the right is as follows:


[Via MBean Browser]

As you can see the value for attribute “Configuration” for this pipeline is “Unavailable”.

Remember the following java code in OSBServiceExplorer.java (see my previous MBeans 11g article):

for (ObjectName osbResourceConfiguration :
    osbResourceConfigurations) {
 
    CompositeDataSupport configuration =
        (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                      "Configuration");

So now, apparently, getting the configuration can result in a NullPointerException. This has to be dealt with in the new 12c version of OSBServiceExplorer.java, besides the fact that a pipeline is now a new resource type.
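
A minimal sketch of how the 12c version can deal with this (hedged: the variable names follow the fragment above, and the message matches the output shown later in this article):

for (ObjectName osbResourceConfiguration :
    osbResourceConfigurations) {

    CompositeDataSupport configuration = null;
    try {
        configuration =
            (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                          "Configuration");
    } catch (Exception e) {
        // for example a pipeline whose monitoring is disabled:
        // the Configuration attribute is "Unavailable"
    }

    if (configuration == null) {
        fileWriter.write("Resource is a Pipeline (without available Configuration)" +
                         "\n");
        continue;
    }
    // ... process the configuration as before ...
}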

But of course for our OSB service explorer we are in particular, interested in the elements (nodes) of the pipeline. In order to get this information available in the System MBean Browser, something has to be done.

Via the Oracle Enterprise Manager FMC 12c I navigated to SOA / service-bus / Home / Projects / TrackService and clicked on tab “Operations”:

Here you can see the Operations settings of this particular service.

Next I clicked on the pipeline “TrackServicePipeline”, where I enabled “Monitoring”.

If we then navigate back to the ResourceConfigurationMBean for pipeline “TrackServicePipeline”, the information on the right is as follows:


[Via MBean Browser]

So now the wanted configuration information is available.

Remark:
For the pipeline “MusicServicePipeline” the monitoring is still disabled, so the configuration is still unavailable.

Diving into attribute Configuration of the ResourceConfigurationMBean

For each found pipeline, proxy and business service the configuration information (canonicalName, service-type, transport-type, url) is written to file.

Proxy service configuration:
Please see my previous MBeans 11g article.

Business service configuration:
Please see my previous MBeans 11g article.

Pipeline configuration:
Below is an example of a pipeline configuration (content fragment of the text file):

Configuration of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean: service-type=SOAP

If the pipeline configuration is unavailable, the following is shown:

Resource is a Pipeline (without available Configuration)

The pipelines can be recognized by the Pipeline$ prefix.

Pipeline, element hierarchy

In the 11g version of OSBServiceExplorer.java, for a proxy service the elements (nodes) of the pipeline were investigated.

See the code fragment below:

CompositeDataSupport pipeline =
    (CompositeDataSupport)configuration.get("pipeline");
TabularDataSupport nodes =
    (TabularDataSupport)pipeline.get("nodes");

In 12c, however, this doesn’t work for a proxy service. The same code can, however, still be used for a pipeline.

For pipeline “TrackServicePipeline”, the configuration (including nodes) looks like:


[Via MBean Browser]

Based on the nodes information (with node-id) in the MBean Browser and the content of pipeline “TrackServicePipeline.pipeline” the following structure can be put together:

The mapping between the node-id and the corresponding element in the Message Flow can be achieved by looking in the .pipeline file for the _ActionId- identification, mentioned as the value for the name key.

Example of the details of node with node-id = 4 and name = _ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc:


[Via MBean Browser]

Content of pipeline “TrackServicePipeline.pipeline”:

<?xml version="1.0" encoding="UTF-8"?>
<con:pipelineEntry xmlns:con="http://www.bea.com/wli/sb/pipeline/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:con1="http://www.bea.com/wli/sb/stages/config" xmlns:con2="http://www.bea.com/wli/sb/stages/routing/config" xmlns:con3="http://www.bea.com/wli/sb/stages/transform/config">
    <con:coreEntry>
        <con:binding type="SOAP" isSoap12="false" xsi:type="con:SoapBindingType">
            <con:wsdl ref="TrackService/proxy/TrackService"/>
            <con:binding>
                <con:name>TrackServiceBinding</con:name>
                <con:namespace>http://trackservice.services.soatraining.amis/</con:namespace>
            </con:binding>
        </con:binding>
        <con:xqConfiguration>
            <con:snippetVersion>1.0</con:snippetVersion>
        </con:xqConfiguration>
    </con:coreEntry>
    <con:router>
        <con:flow>
            <con:route-node name="RouteNode1">
                <con:context>
                    <con1:userNsDecl prefix="trac" namespace="http://trackservice.services.soatraining.amis/"/>
                </con:context>
                <con:actions>
                    <con2:route>
                        <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc</con1:id>
                        <con2:service ref="TrackService/business/CDService" xsi:type="ref:BusinessServiceRef" xmlns:ref="http://www.bea.com/wli/sb/reference"/>
                        <con2:operation>getTracksForCD</con2:operation>
                        <con2:outboundTransform>
                            <con3:replace varName="body" contents-only="true">
                                <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ff9</con1:id>
                                <con3:location>
                                    <con1:xpathText>.</con1:xpathText>
                                </con3:location>
                                <con3:expr>
                                    <con1:xqueryTransform>
                                        <con1:resource ref="TrackService/Resources/xquery/CDService_getTracksForCDRequest"/>
                                        <con1:param name="getTracksForCDRequest">
                                            <con1:path>$body/trac:getTracksForCDRequest</con1:path>
                                        </con1:param>
                                    </con1:xqueryTransform>
                                </con3:expr>
                            </con3:replace>
                        </con2:outboundTransform>
                        <con2:responseTransform>
                            <con3:replace varName="body" contents-only="true">
                                <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ff6</con1:id>
                                <con3:location>
                                    <con1:xpathText>.</con1:xpathText>
                                </con3:location>
                                <con3:expr>
                                    <con1:xqueryTransform>
                                        <con1:resource ref="TrackService/Resources/xquery/CDService_getTracksForCDResponse"/>
                                        <con1:param name="getTracksForCDResponse">
                                            <con1:path>$body/*[1]</con1:path>
                                        </con1:param>
                                    </con1:xqueryTransform>
                                </con3:expr>
                            </con3:replace>
                        </con2:responseTransform>
                    </con2:route>
                </con:actions>
            </con:route-node>
        </con:flow>
    </con:router>
</con:pipelineEntry>

It’s obvious that the nodes in the pipeline form a hierarchy: a node can have children, which in turn can have children of their own, and so on. Because only certain kinds of nodes (Route, Java Callout, Service Callout, etc.) are of interest, some filtering is needed. For more info about this, see my previous MBeans 11g article.
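
The filtering boils down to the following predicate, a sketch extracted from the full code shown later in this article:

import javax.management.openmbean.CompositeData;

public class NodeFilter {
    // Returns true only for the node types of interest: route, wsCallout
    // and javaCallout actions, route nodes and operational branch nodes.
    public static boolean isInteresting(CompositeData node) {
        String type = (String)node.get("type");
        String label = (String)node.get("label");
        if ("Action".equals(type)) {
            return label.contains("route") ||
                   label.contains("wsCallout") ||
                   label.contains("javaCallout");
        }
        return "RouteNode".equals(type) || "OperationalBranchNode".equals(type);
    }
}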

Diving into attribute Metadata of the ResourceConfigurationMBean

For each pipeline found, the metadata information (dependencies and dependents) is written to file.

Example content fragment of the text file:

Metadata of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
dependencies:
- BusinessService$TrackService$business$CDService
- WSDL$TrackService$proxy$TrackService

dependents:
- ProxyService$TrackService$proxy$TrackService
- ProxyService$TrackService$proxy$TrackServiceRest
- ProxyService$TrackService$proxy$TrackServiceRestJSON

As can be seen in the MBean Browser, the metadata for a particular pipeline shows the dependencies on other resources (like business services and WSDLs) and other services that are dependent on the pipeline.

For more info and the responsible code fragment see my previous MBeans 11g article.

Remark:
In the Java code, dependencies on XQuery resources are filtered out and not written to the text file.
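
As a sketch, mirroring the responsible fragment in the full code at the end of this article, reading the Metadata attribute and skipping the XQuery dependencies looks like this:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeDataSupport;

public class MetadataReader {
    // Reads the Metadata attribute of a ResourceConfigurationMBean and
    // prints its dependencies, skipping XQuery resources.
    public static void printDependencies(MBeanServerConnection connection,
                                         ObjectName resourceMBean) throws Exception {
        CompositeDataSupport metadata =
            (CompositeDataSupport)connection.getAttribute(resourceMBean, "Metadata");
        String[] dependencies = (String[])metadata.get("dependencies");
        for (String dependency : dependencies) {
            if (!dependency.contains("Xquery")) {
                System.out.println("- " + dependency);
            }
        }
    }
}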

MBeans with regard to version 11.1.1.7

In the sample Java code shown at the end of my previous MBeans 11g article, the use of the following MBeans can be seen:

MBean and other classes, with the jar file in which they can be found:
- weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean.class:
  <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
- weblogic.management.runtime.ServerRuntimeMBean.class:
  <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
- com.bea.wli.sb.management.configuration.ALSBConfigurationMBean.class:
  <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-api.jar
- com.bea.wli.config.Ref.class:
  <Middleware Home Directory>/Oracle_OSB1/modules/com.bea.common.configfwk_1.7.0.0.jar
- weblogic.management.jmx.MBeanServerInvocationHandler.class:
  <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
- com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class:
  <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-impl.jar

Therefore, in JDeveloper 11g the following Project Libraries and Classpath settings were made:

Description and class path:
- com.bea.common.configfwk_1.6.0.0.jar: /oracle/fmwhome/Oracle_OSB1/modules/com.bea.common.configfwk_1.6.0.0.jar
- sb-kernel-api.jar: /oracle/fmwhome/Oracle_OSB1/lib/sb-kernel-api.jar
- sb-kernel-impl.jar: /oracle/fmwhome/Oracle_OSB1/lib/sb-kernel-impl.jar
- wlfullclient.jar: /oracle/fmwhome/wlserver_10.3/server/lib/wlfullclient.jar

For more info about these MBeans, see my previous MBeans 11g article.

In order to connect to a WebLogic MBean Server, I used the thick client wlfullclient.jar in my previous MBeans 11g article.

This library is not provided by default in a WebLogic installation and must be built. A simple way to do this is described in “Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server, Using the WebLogic JarBuilder Tool”, which can be reached via url: https://docs.oracle.com/cd/E28280_01/web.1111/e13717/jarbuilder.htm#SACLT240.

So I built wlfullclient.jar as follows:

cd <Middleware Home Directory>/wlserver_10.3/server/lib
java -jar wljarbuilder.jar

In the sample Java code shown at the end of this article, the use of the same MBeans can be seen. However, in JDeveloper 12c changes to the Project Libraries and Classpath settings were necessary, due to changes in the jar files used in the 12c environment. Also, wlfullclient.jar is deprecated as of WebLogic Server 12.1.3!

Overview of WebLogic client jar files (client, jar file, protocol):
- WebLogic Full Client:
  - weblogic.jar (6 KB), protocol T3 (via the manifest file MANIFEST.MF, classes in other JAR files are referenced)
  - wlfullclient.jar (111.131 KB), protocol T3 (deprecated as of WebLogic Server 12.1.3)
- WebLogic Thin Client:
  - wlclient.jar (2.128 KB), protocol IIOP
  - wljmxclient.jar (238 KB), protocol IIOP
- WebLogic Thin T3 Client:
  - wlthint3client.jar (7.287 KB), protocol T3

Remark with regard to version 12.2.1:

Due to changes in the JDK, WLS no longer supports JMX with just the wlclient.jar. To use JMX, you must use either the "full client" (weblogic.jar) or wljmxclient.jar.
[https://docs.oracle.com/middleware/1221/wls/JMXCU/accesswls.htm#JMXCU144]

WebLogic Full Client

The WebLogic full client, wlfullclient.jar, is deprecated as of WebLogic Server 12.1.3 and may be removed in a future release. Oracle recommends using the WebLogic Thin T3 client or other appropriate client depending on your environment.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT130]

For WebLogic Server 10.0 and later releases, client applications need to use the wlfullclient.jar file instead of the weblogic.jar. A WebLogic full client is a Java RMI client that uses Oracle’s proprietary T3 protocol to communicate with WebLogic Server, thereby leveraging the Java-to-Java model of distributed computing.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT376]

Not all functionality available with weblogic.jar is available with the wlfullclient.jar. For example, wlfullclient.jar does not support Web Services, which requires the wseeclient.jar. Nor does wlfullclient.jar support operations necessary for development purposes, such as ejbc, or support administrative operations, such as deployment, which still require using the weblogic.jar.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT376]

WebLogic Thin Client

In order to connect to a WebLogic MBean Server, it is also possible to use a thin client wljmxclient.jar (in combination with wlclient.jar). This JAR contains Oracle’s implementation of the HTTP and IIOP protocols.

Remark:
wlclient.jar is included in wljmxclient.jar's MANIFEST ClassPath entry, so wlclient.jar and wljmxclient.jar need to be in the same directory, or both jars need to be specified on the classpath.

Ensure that weblogic.jar or wlfullclient.jar is not included in the classpath if wljmxclient.jar is included. Only the thin client wljmxclient.jar/wlclient.jar or the thick client wlfullclient.jar should be used, but not a combination of both. [https://docs.oracle.com/middleware/1221/wls/JMXCU/accesswls.htm#JMXCU144]

WebLogic Thin T3 Client

The WebLogic Thin T3 Client jar (wlthint3client.jar) is a light-weight, high performing alternative to the wlfullclient.jar and wlclient.jar (IIOP) remote client jars. The Thin T3 client has a minimal footprint while providing access to a rich set of APIs that are appropriate for client usage. As its name implies, the Thin T3 Client uses the WebLogic T3 protocol, which provides significant performance improvements over the wlclient.jar, which uses the IIOP protocol.

The Thin T3 Client is the recommended option for most remote client use cases. There are some limitations in the Thin T3 client, as outlined below. For those few use cases, you may need to use the full client or the IIOP thin client.

Limitations and Considerations:

This release does not support the following:

  • Mbean-based utilities (such as JMS Helper, JMS Module Helper), and JMS multicast are not supported. You can use JMX calls as an alternative to “mbean-based helpers.”
  • JDBC resources, including WebLogic JDBC extensions.
  • Running a WebLogic RMI server in the client.

The Thin T3 client uses JDK classes to connect to the host, including when connecting to dual-stacked machines. If multiple addresses are available on the host, the connection may attempt to go to the wrong address and fail if the host is not properly configured.
[https://docs.oracle.com/middleware/12212/wls/SACLT/wlthint3client.htm#SACLT387]

MBeans with regard to version 12.2.1

As I mentioned earlier in this article, in order to get the Java code working in a 12.2.1 environment, I had to make some changes.

MBean and other classes, with the jar file in which they can be found:
- weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean.class:
  <Middleware Home Directory>/wlserver/server/lib/wlfullclient.jar
- weblogic.management.runtime.ServerRuntimeMBean.class:
  <Middleware Home Directory>/wlserver/server/lib/wlfullclient.jar
- com.bea.wli.sb.management.configuration.ALSBConfigurationMBean.class:
  <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.kernel-api.jar
- com.bea.wli.config.Ref.class:
  <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.configfwk.jar
- weblogic.management.jmx.MBeanServerInvocationHandler.class:
  <Middleware Home Directory>/wlserver/modules/com.bea.core.management.jmx.jar
- com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class:
  <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.kernel-wls.jar

In JDeveloper 12c, the following Project Libraries and Classpath settings were made (at first):

Description and class path:
- com.bea.core.management.jmx.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.bea.core.management.jmx.jar
- oracle.servicebus.configfwk.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.configfwk.jar
- oracle.servicebus.kernel-api.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-api.jar
- oracle.servicebus.kernel-wls.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-wls.jar
- wlfullclient.jar: /u01/app/oracle/fmw/12.2/wlserver/server/lib/wlfullclient.jar

Using wlfullclient.jar:
At first I still used the thick client wlfullclient.jar (despite the fact that it’s deprecated), which I built as follows:

cd <Middleware Home Directory>/wlserver/server/lib
java -jar wljarbuilder.jar
Creating new jar file: wlfullclient.jar

wlfullclient.jar and jarbuilder are deprecated starting from the WebLogic 12.1.3 release.
Please use one of the equivalent stand-alone clients instead. Consult Oracle WebLogic public documents for details.

Compiling and running the OSBServiceExplorer tool in JDeveloper worked.

Using weblogic.jar:
When I changed wlfullclient.jar into weblogic.jar, the OSBServiceExplorer tool also worked.

Using wlclient.jar:
When I changed wlfullclient.jar into wlclient.jar, the OSBServiceExplorer tool did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Using wlclient.jar and wljmxclient.jar:
Also adding wljmxclient.jar did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Adding wls-api.jar:
So, in order to resolve the errors shown above, I also added wls-api.jar. But then I got an error on:

String name = serverRuntimeMBean.getName();

I then decided to go for the, by Oracle recommended, WebLogic Thin T3 client wlthint3client.jar.

Using wlthint3client.jar:
When I changed wlfullclient.jar into wlthint3client.jar, the OSBServiceExplorer tool did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Using wlthint3client.jar and wls-api.jar:
So, in order to resolve the errors shown above, I also added wls-api.jar. But then I again got an error on:

String name = serverRuntimeMBean.getName();

I could, however, run the OSBServiceExplorer tool in JDeveloper, but then I got the error:

Error(160,49): cannot access weblogic.security.ServerRuntimeSecurityAccess; class file for weblogic.security.ServerRuntimeSecurityAccess not found

I found several jar files that could solve this error.

For the time being, I extracted the needed class file (weblogic.security.ServerRuntimeSecurityAccess.class) from the smallest of those jar files into a lib directory on the file system, and in JDeveloper added that lib directory to the Project Classpath.

As it turned out, I had to repeat these steps for the following errors, which I still got after extending the Classpath:

Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/utils/collections/WeakConcurrentHashMap

Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/management/runtime/TimeServiceRuntimeMBean

Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/management/partition/admin/ResourceGroupLifecycleOperations$RGState

After that, compiling and running the OSBServiceExplorer tool in JDeveloper worked.

Using the lib directory with the extracted class files was not what I wanted; adding the jar files mentioned above seemed a better idea. So I picked the jar files with the smallest size that got the job done, and discarded the lib directory.

So in the end, in JDeveloper 12c, the following Project Libraries and Classpath settings were made:

Description and class path:
- com.bea.core.management.jmx.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.bea.core.management.jmx.jar
- com.oracle.weblogic.management.base.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.oracle.weblogic.management.base.jar
- com.oracle.weblogic.security.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.oracle.weblogic.security.jar
- com.oracle.webservices.wls.jaxrpc-client.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/clients/com.oracle.webservices.wls.jaxrpc-client.jar
- oracle.servicebus.configfwk.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.configfwk.jar
- oracle.servicebus.kernel-api.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-api.jar
- oracle.servicebus.kernel-wls.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-wls.jar
- wlthint3client.jar: /u01/app/oracle/fmw/12.2/wlserver/server/lib/wlthint3client.jar
- wls-api.jar: /u01/app/oracle/fmw/12.2/wlserver/server/lib/wls-api.jar

Shell script

For ease of use, a shell script file was created to explore pipeline, proxy and business services via MBeans. WebLogic Server contains a set of MBeans that can be used to configure, monitor and manage WebLogic Server resources.

The content of the shell script file “OSBServiceExplorer” is:

#!/bin/bash

# Script to call OSBServiceExplorer

echo "Start calling OSBServiceExplorer"

java -classpath "OSBServiceExplorer.jar:oracle.servicebus.configfwk.jar:com.bea.core.management.jmx.jar:oracle.servicebus.kernel-api.jar:oracle.servicebus.kernel-wls.jar:wlthint3client.jar:wls-api.jar:com.oracle.weblogic.security.jar:com.oracle.webservices.wls.jaxrpc-client.jar:com.oracle.weblogic.management.base.jar" nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer "xyz" "7001" "weblogic" "xyz"

echo "End calling OSBServiceExplorer"

In the shell script file, a class named OSBServiceExplorer is called via the java executable. The main method of this class expects the following parameters:

Parameter name and description:
- HOSTNAME: host name of the AdminServer
- PORT: port of the AdminServer
- USERNAME: username
- PASSWORD: password

Example content of the text file:

Found server runtimes:
- Server name: DefaultServer. Server state: RUNNING

Found total of 45 refs, including the following pipelines, proxy and business services:
- ProxyService: TrackService/proxy/TrackServiceRest
- BusinessService: MusicService/business/db_InsertCD
- BusinessService: TrackService/business/CDService
- Pipeline: TrackService/pipeline/TrackServicePipeline
- ProxyService: TrackService/proxy/TrackService
- Pipeline: MusicService/pipeline/MusicServicePipeline
- ProxyService: MusicService/proxy/MusicService
- ProxyService: TrackService/proxy/TrackServiceRestJSON

ResourceConfiguration list of pipelines, proxy and business services:
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$MusicService$proxy$MusicService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$MusicService$proxy$MusicService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=/music/MusicService
- Resource: com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean: service-type=SOAP

    Index#4:
       level    = 1
       label    = route
       name     = _ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc
       node-id  = 4
       type     = Action
       children = [1,3]
    Index#6:
       level    = 1
       label    = route-node
       name     = RouteNode1
       node-id  = 6
       type     = RouteNode
       children = [5]

  Metadata of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$TrackService$business$CDService
      - WSDL$TrackService$proxy$TrackService

    dependents:
      - ProxyService$TrackService$proxy$TrackService
      - ProxyService$TrackService$proxy$TrackServiceRest
      - ProxyService$TrackService$proxy$TrackServiceRestJSON

- Resource: com.oracle.osb:Location=DefaultServer,Name=Operations$System$Operator Settings$GlobalOperationalSettings,Type=ResourceConfigurationMBean
- Resource: com.oracle.osb:Location=DefaultServer,Name=Pipeline$MusicService$pipeline$MusicServicePipeline,Type=ResourceConfigurationMBean
  Resource is a Pipeline (without available Configuration)
- Resource: com.oracle.osb:Location=DefaultServer,Name=BusinessService$MusicService$business$db_InsertCD,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=BusinessService$MusicService$business$db_InsertCD,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=jca, url=jca://eis/DB/MUSIC
- Resource: com.oracle.osb:Location=DefaultServer,Name=BusinessService$TrackService$business$CDService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=BusinessService$TrackService$business$CDService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=http://127.0.0.1:7101/cd_services/CDService
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRest,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRest,Type=ResourceConfigurationMBean: service-type=REST, transport-type=http, url=/music/TrackServiceRest
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=/music/TrackService
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRestJSON,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRestJSON,Type=ResourceConfigurationMBean: service-type=REST, transport-type=http, url=/music/TrackServiceRestJSON

The Java code:

package nl.xyz.osbservice.osbserviceexplorer;


import com.bea.wli.config.Ref;
import com.bea.wli.sb.management.configuration.ALSBConfigurationMBean;

import java.io.FileWriter;
import java.io.IOException;

import java.net.MalformedURLException;

import java.util.Collection;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.Properties;
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;

import weblogic.management.jmx.MBeanServerInvocationHandler;
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;


public class OSBServiceExplorer {
    private static MBeanServerConnection connection;
    private static JMXConnector connector;
    private static FileWriter fileWriter;

    /**
     * Indent a string
     * @param indent - The number of indentations to add before a string 
     * @return String - The indented string
     */
    private static String getIndentString(int indent) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < indent; i++) {
            sb.append("  ");
        }
        return sb.toString();
    }


    /**
     * Print composite data (write to file)
     * @param nodes - The list of nodes
     * @param key - The list of keys
     * @param level - The level in the hierarchy of nodes
     */
    private void printCompositeData(TabularDataSupport nodes, Object[] key,
                                    int level) {
        try {
            CompositeData compositeData = nodes.get(key);

            fileWriter.write(getIndentString(level) + "     level    = " +
                             level + "\n");

            String label = (String)compositeData.get("label");
            String name = (String)compositeData.get("name");
            String nodeid = (String)compositeData.get("node-id");
            String type = (String)compositeData.get("type");
            String[] children = (String[])compositeData.get("children");
            if (level == 1 ||
                (label.contains("route-node") || label.contains("route"))) {
                fileWriter.write(getIndentString(level) + "     label    = " +
                                 label + "\n");

                fileWriter.write(getIndentString(level) + "     name     = " +
                                 name + "\n");

                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     type     = " +
                                 type + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = children.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(children[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            } else if (level >= 2) {
                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = children.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(children[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            }

            if ((level == 1 && type.equals("OperationalBranchNode")) ||
                level > 1) {
                level++;

                int size = children.length;

                for (int i = 0; i < size; i++) {
                    key[0] = children[i];
                    printCompositeData(nodes, key, level);
                }
            }

        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public OSBServiceExplorer(HashMap props) {
        super();


        try {

            Properties properties = new Properties();
            properties.putAll(props);

            initConnection(properties.getProperty("HOSTNAME"),
                           properties.getProperty("PORT"),
                           properties.getProperty("USERNAME"),
                           properties.getProperty("PASSWORD"));


            DomainRuntimeServiceMBean domainRuntimeServiceMBean =
                (DomainRuntimeServiceMBean)findDomainRuntimeServiceMBean(connection);

            ServerRuntimeMBean[] serverRuntimes =
                domainRuntimeServiceMBean.getServerRuntimes();

            fileWriter = new FileWriter("OSBServiceExplorer.txt", false);


            fileWriter.write("Found server runtimes:\n");
            int length = serverRuntimes.length;
            for (int i = 0; i < length; i++) {
                ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];
                
                String name = serverRuntimeMBean.getName();
                String state = serverRuntimeMBean.getState();
                fileWriter.write("- Server name: " + name +
                                 ". Server state: " + state + "\n");
            }
            fileWriter.write("" + "\n");

            // Create an mbean instance to perform configuration operations in the created session.
            //
            // There is a separate instance of ALSBConfigurationMBean for each session.
            // There is also one more ALSBConfigurationMBean instance which works on the core data, i.e., the data which ALSB runtime uses.
            // An ALSBConfigurationMBean instance is created whenever a new session is created via the SessionManagementMBean.createSession(String) API.
            // This mbean instance is then used to perform configuration operations in that session.
            // The mbean instance is destroyed when the corresponding session is activated or discarded.
            ALSBConfigurationMBean alsbConfigurationMBean =
                (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
                                                                              ALSBConfigurationMBean.TYPE,
                                                                              null);            

            Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);

            fileWriter.write("Found total of " + refs.size() +
                             " refs, including the following pipelines, proxy and business services:\n");

            for (Ref ref : refs) {
                String typeId = ref.getTypeId();

                if (typeId.equalsIgnoreCase("ProxyService")) {
                    fileWriter.write("- ProxyService: " + ref.getFullName() +
                                     "\n");
                } else if (typeId.equalsIgnoreCase("Pipeline")) {
                    fileWriter.write("- Pipeline: " +
                                     ref.getFullName() + "\n");                    
                } else if (typeId.equalsIgnoreCase("BusinessService")) {
                    fileWriter.write("- BusinessService: " +
                                     ref.getFullName() + "\n");
                } else {
                    //fileWriter.write(ref.getFullName());
                }
            }

            fileWriter.write("" + "\n");

            String domain = "com.oracle.osb";
            String objectNamePattern =
                domain + ":" + "Type=ResourceConfigurationMBean,*";

            Set<ObjectName> osbResourceConfigurations =
                connection.queryNames(new ObjectName(objectNamePattern), null);
            
            fileWriter.write("ResourceConfiguration list of pipelines, proxy and business services:\n");
            for (ObjectName osbResourceConfiguration :
                 osbResourceConfigurations) {

                String canonicalName =
                    osbResourceConfiguration.getCanonicalName();
                fileWriter.write("- Resource: " + canonicalName + "\n");
                              
                try {
                    CompositeDataSupport configuration =
                        (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                      "Configuration");
                      
                    if (canonicalName.contains("ProxyService")) {
                        String servicetype =
                            (String)configuration.get("service-type");
                        CompositeDataSupport transportconfiguration =
                            (CompositeDataSupport)configuration.get("transport-configuration");
                        String transporttype =
                            (String)transportconfiguration.get("transport-type");
                        String url = (String)transportconfiguration.get("url");
                        
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype +
                                         ", transport-type=" + transporttype +
                                         ", url=" + url + "\n");
                    } else if (canonicalName.contains("BusinessService")) {
                        String servicetype =
                            (String)configuration.get("service-type");
                        CompositeDataSupport transportconfiguration =
                            (CompositeDataSupport)configuration.get("transport-configuration");
                        String transporttype =
                            (String)transportconfiguration.get("transport-type");
                        CompositeData[] urlconfiguration =
                            (CompositeData[])transportconfiguration.get("url-configuration");
                        String url = (String)urlconfiguration[0].get("url");
    
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype +
                                         ", transport-type=" + transporttype +
                                         ", url=" + url + "\n");
                    } else if (canonicalName.contains("Pipeline")) {
                        String servicetype =
                            (String)configuration.get("service-type");
    
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype + "\n");
                    }
                    
                    if (canonicalName.contains("Pipeline")) {
                        fileWriter.write("" + "\n");
    
                        CompositeDataSupport pipeline =
                            (CompositeDataSupport)configuration.get("pipeline");
                        TabularDataSupport nodes =
                            (TabularDataSupport)pipeline.get("nodes");
    
                        TabularType tabularType = nodes.getTabularType();
                        CompositeType rowType = tabularType.getRowType();
    
                        Iterator keyIter = nodes.keySet().iterator();
    
                        for (int j = 0; keyIter.hasNext(); ++j) {
    
                            Object[] key = ((Collection)keyIter.next()).toArray();
    
                            CompositeData compositeData = nodes.get(key);
    
                            String label = (String)compositeData.get("label");
                            String type = (String)compositeData.get("type");
                            if (type.equals("Action") &&
                                (label.contains("wsCallout") ||
                                 label.contains("javaCallout") ||
                                 label.contains("route"))) {
    
                                fileWriter.write("    Index#" + j + ":\n");
                                printCompositeData(nodes, key, 1);
                            } else if (type.equals("OperationalBranchNode") ||
                                       type.equals("RouteNode")) {
    
                                fileWriter.write("    Index#" + j + ":\n");
                                printCompositeData(nodes, key, 1);
                            }
                        }
                        
                        fileWriter.write("" + "\n");
                        
                        CompositeDataSupport metadata =
                            (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                          "Metadata");
                        
                        fileWriter.write("  Metadata of " + canonicalName + "\n");
    
                        String[] dependencies =
                            (String[])metadata.get("dependencies");
                        fileWriter.write("    dependencies:\n");
                        int size;
                        size = dependencies.length;
                        for (int i = 0; i < size; i++) {
                            String dependency = dependencies[i];
                            if (!dependency.contains("Xquery")) {
                                fileWriter.write("      - " + dependency + "\n");
                            }
                        }
                        fileWriter.write("" + "\n");
    
                        String[] dependents = (String[])metadata.get("dependents");
                        fileWriter.write("    dependents:\n");
                        size = dependents.length;
                        for (int i = 0; i < size; i++) {
                            String dependent = dependents[i];
                            fileWriter.write("      - " + dependent + "\n");
                        }
                        fileWriter.write("" + "\n");                
                    }
                }
                catch(Exception e) {
                    if (canonicalName.contains("Pipeline")) {
                      fileWriter.write("  Resource is a Pipeline (without available Configuration)" + "\n");
                    } else {
                      e.printStackTrace();
                    }
                }
            }
            fileWriter.close();

            System.out.println("Succesfully completed");

        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (connector != null)
                try {
                    connector.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
        }
    }


    /*
       * Initialize connection to the Domain Runtime MBean Server.
       */

    public static void initConnection(String hostname, String portString,
                                      String username,
                                      String password) throws IOException,
                                                              MalformedURLException {

        String protocol = "t3";
        Integer portInteger = Integer.valueOf(portString);
        int port = portInteger.intValue();
        String jndiroot = "/jndi/";
        String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;

        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostname, port, jndiroot +
                              mbeanserver);

        Hashtable hashtable = new Hashtable();
        hashtable.put(Context.SECURITY_PRINCIPAL, username);
        hashtable.put(Context.SECURITY_CREDENTIALS, password);
        hashtable.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                      "weblogic.management.remote");
        hashtable.put("jmx.remote.x.request.waiting.timeout", new Long(10000));

        connector = JMXConnectorFactory.connect(serviceURL, hashtable);
        connection = connector.getMBeanServerConnection();
    }


    private static Ref constructRef(String refType, String serviceURI) {
        Ref ref = null;
        String[] uriData = serviceURI.split("/");
        ref = new Ref(refType, uriData);
        return ref;
    }


    /**
     * Finds the specified MBean object
     *
     * @param connection - A connection to the MBeanServer.
     * @return Object - The MBean or null if the MBean was not found.
     */
    public Object findDomainRuntimeServiceMBean(MBeanServerConnection connection) {
        try {
            ObjectName objectName =
                new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
            return (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                            objectName);
        } catch (MalformedObjectNameException e) {
            e.printStackTrace();
            return null;
        }
    }


    public static void main(String[] args) {
        try {
            if (args.length < 4) {
                System.out.println("Provide values for the following parameters: HOSTNAME, PORT, USERNAME, PASSWORD.");

            } else {
                HashMap<String, String> map = new HashMap<String, String>();

                map.put("HOSTNAME", args[0]);
                map.put("PORT", args[1]);
                map.put("USERNAME", args[2]);
                map.put("PASSWORD", args[3]);
                OSBServiceExplorer osbServiceExplorer =
                    new OSBServiceExplorer(map);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

The post Oracle Service Bus 12.2.1.1.0: Service Exploring via WebLogic Server MBeans with JMX appeared first on AMIS Oracle and Java Blog.

The World Cup 2018 Challenge is live... An app created 12 years ago to showcase the awesome Oracle APEX

Dimitri Gielis - Tue, 2018-06-05 10:39

Since 2006 it's a tradition... every two years we launch a site where you can bet on the games of the World Cup (or Euro Cup). This year you find the app at https://www.wc2018challenge.com

You can read more about the history and see how things look like over time, or you can look on this blog at other posts in the different years.

The initial goal of the app was to showcase what you can do with Oracle Application Express (APEX). Many companies have Excel sheets in which they keep the scores of the games and some kind of ranking for their employees. When I saw such an Excel sheet in 2006, I thought: oh well, I can do this in APEX, and it would give us way more benefits... results straight away, no sending or merging of Excel sheets, a much more attractive design, etc. And from then on this app has lived a life of its own.

Every two years I updated the app with the latest and greatest of Oracle APEX at that time.

Today the site is built in Oracle APEX 18.1 and it showcases some of the new features.
The look and feel is completely upgraded. Instead of a custom theme, the site now uses Universal Theme. You might think it doesn't look like a typical APEX app, but it is! Just some minimal CSS changes and a background image make the difference.

The other big change is Social Authentication, which now uses the built-in capabilities of APEX 18.1 instead of the custom authentication scheme I used in previous years. You can authenticate with Google, with Facebook, or with your own email (custom).

Other changes include JET charts and some smaller enhancements that came with APEX 5.1 and 18.1.

Some people asked me how certain features were done, so I'll write separate blog posts about how Universal Theme was adapted on the landing page, how Social Authentication was included, and which issues we ran into along the way. If you wonder how anything else was done, I'm happy to write more posts to explain.

Finally, I would like to thank a few people who helped to make the site ready for this year: Erik, Eduardo, Miguel, Diego, Galan, and Theo, thanks so much!
Categories: Development
