Feed aggregator

Brazil

Greg Pavlik - Sun, 2015-12-20 23:11
Blown away to get my purple belt in Brazilian Jiu Jitsu from 10th Planet black belt Alex Canders.


ORAMEX Tech Day 2015 Guadalajara: Global Cloud UX Goes Local

Usable Apps - Sun, 2015-12-20 22:59

You asked. We came. We were already there.

In November, the Oracle México Development Center (MDC) in Guadalajara hosted the ORAMEX Tech Day 2015 event. This location gave the Grupo de Usuarios Oracle de México (@oramexico), the Oracle User Group in México, access to the very strong technical community in the region. Attendees from Guadalajara and surrounding cities such as Colima, León, and Morelia heard MDC General Manager Erik Peterson kick off the proceedings with a timely keynote on the important role that the MDC (now celebrating its 5th year) plays in delivering Oracle products and services.

Erik Peterson delivers the MDC keynote at the ORAMEX Tech Day.

Naturally, Tech Day was also a perfect opportunity for the Oracle Applications User Experience (UX) team to support our ORAMEX friends with the latest and greatest UX outreach.

Oracle Applications UX staffers at ORAMEX Tech Day (Left to right): Sarahi Mireles, Tim Dubois, Rafael (Rafa) Belloni, and Noel Portugal (image courtesy of ORAMEX) 

UX team members from the U.S., Senior Manager UX Product and Program Management Tim Dubois (@timdubis) and Senior Manager Emerging Tech Development Noel Portugal (@noelportugal), joined local staffers Senior UX Design Developer Rafa Belloni (@rafabelloni) and UX Developer Sarahi Mireles (@sarahimireles) to demo the latest UX technical resources for the community, to bring everyone up to speed on the latest UX cloud strategy and messages, and to take the pulse of the local market for our cloud UX and innovation enablement and outreach.

Tim and Sarahi demoed the latest from the Release 10 Simplified UI PaaS4SaaS Rapid Development Kit (RDK), and Rafa and Noel showed off cool Internet of Things proof-of-concept innovations, all seamlessly part of the same Oracle UX cloud strategy.

Sarahi leading and winning with the RDK 

Tim and Sarahi provided a dual-language (English and Spanish), real-time exploration of what the RDK is, why you need it, what it contains, and how you get started.

The long view: Tim explains that the RDK is part of an overall enablement strategy for the Oracle Cloud UX: Simple to use, simple to build, simple to sell solutions. 

You can get started by grabbing the technology-neutral Simplified UI UX Design Patterns for Release 10 eBook. It's free. And watch out for updates to the RDK on the "Build a Simplified UI" page on the Usable Apps website. Bookmark it now!

Your FREE Simplified UI UX Design Patterns eBook for PaaS and SaaS is now available

ORAMEX Tech Day 2015 was a great success. It gave OAUX an opportunity to collaborate with, and enable, a local technical community and Oracle User Group; to demonstrate in practical ways our commitment to bringing that must-have cloud UX message and resources to partners and customers worldwide; and, of course, to show examples of the awesome role the MDC UX team plays within Oracle.

Where will we go next? I wonder…

What's next? Stay tuned to the Usable Apps website for event details and how you can participate in our outreach and follow us on Twitter at @usableapps for up-to-the-minute happenings!

Special thanks go to Plinio Arbizu (@parbizu), Rolando Carrasco (@borland_c), and the rest of the ORAMEX team for inviting us and for organizing such a great event.

WS-Policy Support for IWS

Anthony Shorten - Sun, 2015-12-20 20:43

One of the major advantages of Inbound Web Services (IWS) is the support for security based around the WS-Policy standard. By supporting WS-Policy it is now possible to support a wide range of transport and message security standards within each individual web service. It is also possible to support multiple policies. This allows maximum flexibility for interfacing to Oracle Utilities products using the WS-Policy support provided by Oracle WebLogic and IBM WebSphere (for selected products). This means the web services client calling our Inbound Web Services must comply with at least one of the WS-Policy directives attached to the service.

The support for WS-Policy is implemented in a number of areas:

  • It is possible to attach custom WS-Policy compliant policies directly to the Inbound Web Service as an annotation. The Oracle Utilities product supplies an optional default annotation to implement backward compatibility with XML Application Integration (XAI). This allows customers using XAI to move to Inbound Web Services. Oracle recommends not attaching policies within the Inbound Web Services definition, as that can reduce the flexibility of your interfaces.
  • It is possible to attach policies within the J2EE Web Application Server AFTER deployment to individual web services. This information is retained across deployments using a deployment file that is generated by the container at deployment time. In Oracle WebLogic this is contained in the deployment plan generated by the deployment activity, and the policies are reapplied automatically during each redeployment. For Oracle WebLogic, a large number of WS-Policy files are supported for both message and transport security.
  • For Oracle WebLogic EE customers, it is also possible to use Oracle Web Services Manager to attach additional WS-Policy security policies supported by that product. Again, this is done at deployment time. The advantage of Oracle Web Services Manager is that it reuses all of the policies supplied with Oracle WebLogic, adds advanced policies, and also adds access control rules that you can attach to an Inbound Web Service to control when, where, and by whom it is used.

The bottom line is that you can use any policy (supplied with the J2EE container or custom) that is supported by the J2EE container. You cannot introduce a policy that is not compatible with the container itself as we delegate security to the container.
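To give a flavour of what such a policy looks like, here is a minimal sketch of a WS-Policy document that requires a WS-Security UsernameToken, based on the WS-SecurityPolicy 1.2 standard (an illustration only, not a product-supplied policy file):

<wsp:Policy
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
  <sp:SupportingTokens>
    <wsp:Policy>
      <!-- require a UsernameToken in the WS-Security header of every request -->
      <sp:UsernameToken
          sp:IncludeToken="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient"/>
    </wsp:Policy>
  </sp:SupportingTokens>
</wsp:Policy>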

The only thing we do not support at present is applying a WS-Policy to part of a message or at the operation level. The WS-Policy applies across the whole web service.

IT Tage 2015 - "Analysing and troubleshooting Parallel Execution" presentation material

Randolf Geist - Sun, 2015-12-20 12:24
Thanks to all who attended my presentation "Analysing and troubleshooting Parallel Execution" at the IT Tage conference 2015 in Frankfurt, Germany. You can download the presentation material here in PowerPoint or PDF format, as well as check out the Slideshare upload.

Note that the PowerPoint format adds value in that many of the slides come with additional explanations in the notes section.

If you are interested in more details I recommend visiting this post which links to many other posts describing the different new features in greater depth.

Join the OTN team @ Oracle CloudWorld Developer January 19th... in NYC!

OTN TechBlog - Fri, 2015-12-18 12:10

The OTN team is excited to see you at the Oracle CloudWorld Developer event in New York on January 19, 2016. Join us to learn how you can leverage Oracle Cloud Platform technologies for the complete development lifecycle! Explore Oracle PaaS, which enables you to deliver better-quality code with the agility you need to meet today's IT challenges.

  

Hear from top technology experts about how the cloud will transform your development organization in sessions spanning four tracks: Database, DevOps, Enterprise Integration, and Mobile. Attend this free event to enjoy networking opportunities with your peers, product demos, and break-out sessions, and to hear from Chris Tonas, Vice President of Mobility and Development Tools at Oracle, on best practices for development in the cloud.

Who should attend?
  • Developers 
  • Architects 
  • Database Administrators 
  • System Administrators 
  • Project Managers 
  • Students

Oracle Cloud – Glassfish Port 4848 Madness?

John Scott - Thu, 2015-12-17 05:20

A few days after my last post on accessing Glassfish, something dawned on me.

In that post I mentioned that Glassfish was running on port 4848; however, I was able to access the DBaaS monitor via HTTP/HTTPS, which run on ports 80 and 443 respectively.

So, the question is, how am I able to access both APEX and DBaaS monitor via ports 80 / 443 when Glassfish is running on port 4848?

If you check the DBaaS instance for the ports that are listening, using a command similar to this:

[root@DEMO ~]# netstat -an | grep LISTEN
tcp 0 0 0.0.0.0:37764 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN
tcp 0 0 :::5500 :::* LISTEN
tcp 0 0 :::16386 :::* LISTEN
tcp 0 0 :::12164 :::* LISTEN
tcp 0 0 :::5000 :::* LISTEN
tcp 0 0 ::ffff:127.0.0.1:5006 :::* LISTEN
tcp 0 0 :::111 :::* LISTEN
tcp 0 0 :::8080 :::* LISTEN
tcp 0 0 :::1521 :::* LISTEN
tcp 0 0 :::8181 :::* LISTEN
tcp 0 0 :::22 :::* LISTEN
tcp 0 0 ::1:631 :::* LISTEN

You can see there's nothing listening on port 80 (HTTP) or 443 (HTTPS). So how is our web request being handled? This confused me for more than a few minutes.

Based on having used Amazon AWS for years, I had a quick look at the network rules, as I expected to find some port-forwarding rules doing the magic conversion of relaying traffic from port 80 to 4848, etc.

However…

network_forward.png

nothing there at all… I couldn't even see an option for network port forwarding (which IMHO is pretty confusing, since I'd expect it to be here).

The answer turned out to be pretty simple. The GUI shows network rules enforced outside of the DBaaS instance itself; if you log in to the DBaaS instance, there are also firewall rules configured there.

Let’s SSH into the machine using our SSH key

[jes@mac oracle-cloud]$ ssh -i oracle_cloud_rsa opc@<my.public.ip.here>
[opc@DEMO ~]$

now, let's sudo to the root user

[opc@DEMO ~]$ sudo su -
[root@DEMO ~]#

and let's check the firewall rules set up using iptables

[root@DEMO ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Hmmm, this threw me; I did expect something to be listed here.

Long story short, it's the PREROUTING rules we need to look at, which we can do via a command similar to:

[root@DEMO ~]# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
REDIRECT tcp -- anywhere anywhere tcp dpt:http redir ports 8080
REDIRECT udp -- anywhere anywhere udp dpt:http redir ports 8080
REDIRECT tcp -- anywhere anywhere tcp dpt:https redir ports 8181
REDIRECT udp -- anywhere anywhere udp dpt:https redir ports 8181

Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination

So here you can see that any traffic coming in on the HTTP port is redirected to port 8080 and any HTTPS traffic is redirected to port 8181 (which is the SSL port that Glassfish listens on).

So it's these 'magically transparent' and 'not very obvious' iptables rules that redirect incoming HTTP/HTTPS traffic internally to Glassfish listening on ports 8080 and 8181.

Why is this relevant and why should you care?

Well, this is important if (for example) you didn't want users to directly access (such an old version of) Glassfish and instead put a proxy like NGINX in front of it. You would need to remove or modify those pre-routing rules so that the traffic would go to NGINX (or Apache or whatever) first and then be reverse proxied from NGINX to Glassfish (this is something we do in our production instances).
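For example, if your rules match the output above, removing the HTTP redirect so a proxy can own that traffic might look something like this (a sketch only; rule numbers vary, so list them first and double-check before deleting anything):

[root@DEMO ~]# iptables -t nat -L PREROUTING --line-numbers
[root@DEMO ~]# iptables -t nat -D PREROUTING 1   # the tcp http->8080 redirect was rule 1 in this example
[root@DEMO ~]# service iptables save             # persist across reboots, assuming the iptables service manages the rules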


11i pre-upgrade data fix script ap_wrg_11i_chrg_alloc_fix.sql runs very slow

Vikram Das - Wed, 2015-12-16 20:51
We are currently upgrading one of our ERP instances from 11.5.10.2 to R12.2.5. One of the pre-upgrade steps is to execute the data fix script ap_wrg_11i_chrg_alloc_fix.sql. However, this script has been running very, very slowly. After 4 weeks of monitoring, logging SRs with Oracle, escalating, etc., we started a group chat today with our internal experts: Ali, Germaine, Aditya, Mukhtiar, Martha Gomez, and Zoltan. I also invited our top-notch EBS techstack expert John Felix. After doing an explain plan on the SQL and looking at the updates being done by the query, I predicted that it would take 65 days to complete.

John pointed out that the query was using the index AP_INVOICE_DISTRIBUTIONS_N4, which had a very high cost. We used a SQL profile that replaced AP_INVOICE_DISTRIBUTIONS_N4 with AP_INVOICE_DISTRIBUTIONS_U1. The query started running faster, and my new prediction was that it would complete in 5.45 days.

John then noticed that another select statement was using the same high-cost index AP_INVOICE_DISTRIBUTIONS_N4.

After discussing among ourselves, we decided to drop the index, run the script and re-create the index. Aditya saved the definition of the index and dropped it.

DBMS_METADATA.GET_DDL('INDEX','AP_INVOICE_DISTRIBUTIONS_N4','AP')
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

  CREATE INDEX "AP"."AP_INVOICE_DISTRIBUTIONS_N4" ON "AP"."AP_INVOICE_DISTRIBUTIONS_ALL" ("ACCOUNTING_DATE")
  PCTFREE 10 INITRANS 11 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "APPS_TS_TX_IDX"

1 row selected.

SQL> drop index AP.AP_INVOICE_DISTRIBUTIONS_N4;

Index dropped.

The updates started happening blazingly fast. The whole thing was done in 39 minutes, and we saw the much-awaited:

SQL> set time on
16:34:16 SQL> @ap_wrg_11i_chrg_alloc_fix.sql
Enter value for resp_name: Payables Manager
Enter value for usr_name: 123456
-------------------------------------------------------------------------------
/erp11i/applcsf/temp/9570496-fix-16:34:40.html is the log file created
-------------------------------------------------------------------------------

PL/SQL procedure successfully completed.

17:13:36 SQL>

From 65 days to 5.45 days to 39 minutes.  Remarkable.  Thank you John for your correct diagnosis and solution.
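For completeness, once the fix script finished, the index could be recreated from the saved DDL. A sketch (the ONLINE and PARALLEL clauses are optional additions for speed, not part of the captured definition):

SQL> CREATE INDEX AP.AP_INVOICE_DISTRIBUTIONS_N4
     ON AP.AP_INVOICE_DISTRIBUTIONS_ALL (ACCOUNTING_DATE)
     TABLESPACE APPS_TS_TX_IDX ONLINE PARALLEL 8;

SQL> ALTER INDEX AP.AP_INVOICE_DISTRIBUTIONS_N4 NOPARALLEL;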
Categories: APPS Blogs

Yes, there is MORE...OTN VTS Replay Content to binge on!

OTN TechBlog - Tue, 2015-12-15 14:50

As a reminder, the Virtual Technology Summit content consists of highly technical demos, presentations, and hands-on labs (HOLs) prepared by both internal and external Oracle product experts. Below are the latest highlighted sessions from the replay library groups on community.oracle.com.

Now onto the bingeing.... 

Pi on Wheels, Make Your Own Robot 

By Michael Hoffer, computer scientist, Goethe-Center for Scientific Computing in Frankfurt

The Pi on Wheels is an affordable open source DIY robot that is ideal for learning Java-related technologies in the context of the Internet of Things. In this session we will talk about how 3D printing works and how it can be utilized to build robots. The most fascinating aspect of 3D printing is that it is astonishingly easy to customize the robot. It allows you to build something completely new and different. We provide a Java-based IDE that allows you to control and program the robot. In addition, it can be used to programmatically design 3D geometries.


By Alex Barclay, Principal Product Manager, Solaris and Systems Security, Oracle

Learn about the security threats to your public and private cloud and gain insight into how the Oracle Security Architecture helps reduce risk. This webcast will provide detailed information on the top 20 cloud security threats and how different parts of the Oracle systems stack help eliminate each threat.

By Christian Shay, Principal Product Director, Oracle

This session explores .NET coding and tuning best practices to achieve faster data access performance. It presents techniques and trade-offs for optimizing connection pooling, caching, data fetching and updating, statement batching, and Oracle datatype usage. We will also explore using Oracle Performance Analyzer from Visual Studio to tune a .NET application's use of the Oracle Database end to end.

By Kent Graziano, Oracle ACE Director and Principal of Data Warrior
Oracle SQL Developer Data Modeler (SDDM) has been around for a few years now and is up to version 4.1. It really is an industrial-strength data modeling tool that can be used for any data modeling task you need to tackle. This presentation will demonstrate at least five features, tips, and tricks that I rely on to make me more efficient (and agile) in developing my models.

Get SDDM installed on your device and bring it to the session so you can follow along.

By Shukie Ganguly

WebLogic Server 12.1.3 provides support for innovative APIs and productive tools for application development, including APIs for JAX-RS 2.0, JSON Processing (JSR 353), WebSocket (JSR 356), and JPA 2.1. This session from the July 2015 OTN Virtual Technology Summit provides an overview of each of these APIs and then demonstrates how you can use these capabilities to simplify the development of server applications accessed by "rich" clients using lightweight web-based protocols such as REST and WebSocket.



More On Wearable Tech

Floyd Teter - Mon, 2015-12-14 14:04
I've been going through an amazing experience over the past month plus...purchased and began wearing an Apple iWatch.  Never really thought I would do so...kind of did it on the spur of the moment.  Plus a little push from my team lead at Oracle, who wears one and loves it.

Even after a month of wearing the iWatch, I can't really point at one particular feature that makes it worthwhile.  It's really more a collection of little things that add up to big value.

One example:  I have a reputation far and wide for being late for meetings (could be a Freudian thing, considering how much I detest meetings).  5 minutes before a meeting begins, my iWatch starts to vibrate on my wrist like a nano-jackhammer.  My punctuality for meetings is much improved now, much to the joy of my managers, peers and customers.

Another example:  I can take a phone call wherever I am, distraction free.  That's right, calling Dick Tracy.  Driving, taking a walk, whatever...we can talk now.

Notifications are wonderfully designed...much better than the iPhone or the iPad or the Mac or whatever.  I've actually turned on Notifications again, because they notify without being intrusive or distracting.

A few other dandies as well: getting through the airport security line with the iWatch is a bit quicker than with the iPhone, and the much-improved implementation of Siri makes voice dictation for texting something I can now use reliably.

So it's improved my productivity... not so much by hitting a home run in any particular area, but through lots of little incremental improvements. Pretty cool wearable tech.

Oracle Apex 5.0 and APEX_JSON

Kubilay Çilkara - Sat, 2015-12-12 04:09
How many lines of code does it take to make a web service call? Answer: 39

That is how many lines of PL/SQL I had to write in Oracle Apex 5.0 to make a web service call to an external API.

I used Adzuna's REST API to retrieve the latitude and longitude and the monthly rent of 2-bed properties for rent in a specific location in the UK. The API returns JSON, which the APEX_JSON package is able to parse easily. Adzuna is a property search engine which also provides aggregate data for properties in various countries around the world.

I think the native APEX_JSON package in Oracle Apex 5.0 is very useful and makes integrating web services into your Oracle Apex applications very easy. Here is an application I created in a matter of hours which shows you average rents for properties in a location of your choice in the UK.

Here is the link for the app:  http://enciva-uk15.com/ords/f?p=174:1













And here is the code:


If you want to run this as-is in your SQL Workshop, make sure you replace {myadzunaid:myadzunakey} in the code with your Adzuna app_id and app_key, which you can obtain from the Adzuna website https://developer.adzuna.com/, as I have removed mine from the code. They also have very good interactive API documentation here: http://api.adzuna.com/static/swagger-ui/index.html#!/adzuna


create or replace procedure get_rent_data(p_where in varchar2, p_radius in number, p_room in number)
is
v_resp_r clob;
j apex_json.t_values;
l_paths apex_t_varchar2;
v_id varchar(50);
v_lat decimal(9,6);
v_lon decimal(9,6);
v_rent number(10);

begin
-- http housekeeping
apex_web_service.g_request_headers(1).name  := 'Accept'; 
apex_web_service.g_request_headers(1).value := 'application/json; charset=utf-8'; 
apex_web_service.g_request_headers(2).name  := 'Content-Type'; 
apex_web_service.g_request_headers(2).value := 'application/json; charset=utf-8';

v_resp_r := apex_web_service.make_rest_request 
      ( p_url => 'http://api.adzuna.com:80/v1/api/property/gb/search/1' 
      , p_http_method => 'GET' 
      , p_parm_name => apex_util.string_to_table('app_id:app_key:where:max_days_old:sort_by:category:distance:results_per_page:beds') 
      , p_parm_value => apex_util.string_to_table('{myadzunaid:myadzunakey}:'||p_where||':90:date:to-rent:'||p_radius||':100:'||p_room||'') 
      );
-- parse json
apex_json.parse(j, v_resp_r);


-- start looping on json
l_paths := apex_json.find_paths_like (
        p_values         => j,
        p_return_path => 'results[%]',
        p_subpath       => '.beds',
        p_value           => '2' );
        
for i in 1 .. l_paths.count loop
       v_id   := apex_json.get_varchar2(p_values => j, p_path => l_paths(i)||'.id');
       -- use get_number for the numeric fields rather than relying on implicit varchar2 conversion
       v_rent := apex_json.get_number(p_values => j, p_path => l_paths(i)||'.price_per_month');
       v_lat  := apex_json.get_number(p_values => j, p_path => l_paths(i)||'.latitude');
       v_lon  := apex_json.get_number(p_values => j, p_path => l_paths(i)||'.longitude');

-- debug print values to page
 htp.p(v_id||' - '||v_lat||','||v_lon||' Rent: £'||v_rent);

end loop;

END;
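To try the procedure from SQL Workshop or an APEX page process, a call might look like this (the location, radius, and bedroom count are just example values):

begin
  -- look for 2-bed rentals around London within a 10 unit radius
  get_rent_data(p_where => 'London', p_radius => 10, p_room => 2);
end;
/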

Categories: DBA Blogs

PHP 7 OCI8 2.1.0 available on PECL

Christopher Jones - Fri, 2015-12-11 23:47

I've released PHP 7 OCI8 2.1 on PECL and simultaneously made a patch release OCI8 2.0.10 which is compatible with PHP 5.2 - PHP 5.6.

To install OCI8 for PHP 7 use:

pecl install oci8

This installs OCI8 2.1 which, as I'm sure you can guess, had a lot of internal changes to make it compatible with the vastly changed internals of PHP 7.

If you want to install OCI8 for PHP 5.2, 5.3, 5.4, 5.5, or 5.6 use:

pecl install oci8-2.0.10
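Either way, a quick sanity check that the extension is visible to the CLI (assuming you have added extension=oci8.so to the php.ini that the CLI reads; the exact path varies by platform):

php -m | grep oci8
php --ri oci8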

Functionality in 2.0.10 and 2.1.0 is equivalent. They both contain the fix for bug 68298, an overflow when binding 64-bit numbers.

[Update: Windows DLLs have been built.] At time of writing, Windows DLLs were not yet built on PECL. If you need them, you can grab them from the full PHP Windows bundle.

Five Questions to Ask Before Purchasing a Data Discovery Tool

Kubilay Çilkara - Thu, 2015-12-10 17:14
Regardless of what type of industry or business you are involved in, the bottom-line goal is to optimize sales; and that involves replacing any archaic tech processes with cutting-edge technology and replacing existing chaos with results-driven clarity.
Data discovery tools, as a business-intelligence architecture, create that clarity through a user-driven process that searches for patterns or specific items in a data set via interactive reports. Visualization is a huge component of data discovery tools. One can merge data from multiple sources into a single data set and from it create interactive, stunning dashboards, reports, and analyses; the user observes the data come to life via striking visualizations. Furthermore, business users want to perform their own data analysis and reporting, and with a data discovery tool they can! After it's all said and done, smarter business decisions are generated, and that drives results.
Before purchasing a data discovery tool, however, several questions should be addressed:

1: What About Data Prep?

It's important to realize that some companies will claim their data-discovery products are self-service, but keep in mind that many of these products require a data prep tool in order to access the data to be analyzed. Preparing data is challenging, and if a data prep tool is not included, one must be purchased. Choose a data discovery tool that enables data prep to be handled without any external support.
As a side note:  governed self-service discovery provides easy access to data from IT; and an enterprise discovery platform will give IT full visibility to the data and analytic applications while it meets the business’s need for self-service.  Business users embrace the independence they are empowered with to upload and combine data on their own.  

2:  Is Assistance from IT Required for Data Discovery Tools?

Business users want the ability to prepare their own personal dashboards and explore data in new ways without needing to rely heavily on IT. Data discovery tools do not require the intervention of IT professionals, yet the relationship with IT remains: they empower the business to self-serve while maintaining IT stewardship. Data discovery tools allow users to directly access data and create dashboards that are contextual to their needs, when they need it and how they need it! This, in turn, reduces the number of requests for reports and dashboards from IT staff and allows those professionals to focus more intently on development projects and system improvements. Software solutions that support data discovery, such as business intelligence platforms with innovative visualization capabilities, are enthusiastically adopted by non-technical business users, since these users can perform deep, intuitive analysis of any enterprise information without reliance on IT assistance.
      
3:  Will Data Discovery Tools Allow One to Interact with the Data?

The fun thing is, you can play with the data to the point of being able to create, modify and drill down on a specific display.  A beautiful feature of data discovery tools is the interactive component which permits one to interact with corporate data sources visually to uncover hidden trends and outliers.  Data discovery facilitates intuitive, visual-based and multivariate analysis via selecting, zooming, pivoting, and re-sorting to alter visuals for measurement, comparisons and observation.

4:  Are Data Discovery Tools Intended for Enterprise Use?

Enabling the business to self-serve while maintaining IT stewardship produces decisions the enterprise can rely on. Data discovery tools are invaluable for enterprise use; organizations can plan their approach to incorporating data discovery tools into their infrastructure and business practice.
Data discovery tools allow one to retrieve and decipher data from spreadsheets, departmental databases, enterprise data warehouse and third-party data sources more efficiently than ever!  Multidimensional information can be transformed into striking graphical representations—3D bar and pie charts, histograms, scatter plots and so much more!  Data discovery tools deliver enterprise solutions within the realms of business information and analytics, storage, networks & compliance, application development, integration, modernization and database servers and tools.  

5:  With Data Discovery Tools Can I Retrieve Answers At any Time?

Data discovery tools will allow you to make inquiries and get answers quickly and seamlessly. Geographic location makes no difference, since files can be loaded on a laptop, a mobile phone, or other mobile devices. With a few clicks, you can unlock all your data from servers, a mainframe, or a PC.
Multiple charts, graphs, maps and other visuals can, all, be combined in analytic dashboards and interactive apps.  Answers to crucial questions and issues can be quickly unveiled.  Analysts can share the data, with ease, among all users via the web and mobile devices—all operating like a fine-tuned engine—anytime, anywhere.       

Data discovery tools are changing business intelligence!

Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket Software, on topics such as terminal emulation, legacy modernization, enterprise search, big data, enterprise mobility, and more.
Categories: DBA Blogs

The Infogram is Moving!

Oracle Infogram - Thu, 2015-12-10 17:10
See us at our new site: Oracle Priority Support Infogram.

A Love Letter to Singapore Airlines

Doug Burns - Thu, 2015-12-10 14:25
I used to be a Star Alliance Gold card holder from my extensive travel with BMI and other *A carriers. Eventually my travel tailed off a little and I dropped to Silver *just* before BA took over BMI and my *A status was converted to One World. Which was OK, because a BA Silver is in many ways similar to other airlines' gold, with all the lounge access I could need. The chances of getting or retaining a BA Gold card were about the same as those of me becoming a teetotal vegan, so I settled into my new position in life ;-)

However, it was a little disappointing and strange that I switched over to One World just before I landed a job in Singapore. In my *A days, everyone knew that Singapore Airlines were *the* top *A carrier (honourable mention to Air New Zealand), so they always cropped up in any forum conversation about how best to use miles. Now I was in the perfect place to earn and redeem miles (my new employer always uses SQ for travel), but I was kind of stuck with my BA Silver and a whole bunch of miles and partner vouchers and the rest. To give you an example, when my new employer was helping me book our business class flights to move out to Singapore, you could tell they were a little confused as to why we weren't choosing SQ. Tier points, of course! ;-)

Don't get me wrong, BA are great and I've had some good rewards over the past few years, but my choice of loyalty program suddenly felt out of step with my life so I was considering a gradual cutover to KrisFlyer. But SQ never do status matches (as far as I know), so it was going to take a while. Making it worse was the fact that I've grown to like Cathay Pacific and so the temptation to stay put in OneWorld is stronger.

Anyway, I've said enough to merely touch on my intense geekery about airline loyalty programs and, for that matter, airlines and air travel in general.

However, the experience of last week has convinced me that Singapore Airlines are unparalleled in their customer service. The fleet and on-board service are already great, even in Economy (really - Europeans should try a dose of Far Eastern mid-haul travel to see the difference), but Customer Service is such a difficult thing to get right and SQ absolutely knocked the ball out of the park!

I'm terrible with names and remembering them but, in any case, there was such a large team of people over the course of 3 and a half days that were almost uniformly excellent, professional and warm that I'm not sure I want to single anyone out. I will pick out a few small examples (in order of the time they happened) but I'm not sure that will communicate just how happy everyone I know was with the customer service.

- I was constantly having struggles getting out of the terminal for a smoke and, on one occasion, I asked one of the senior ground crew how I could get out. He walked me out personally, dealt with security, and stood there while I had a smoke, so he could then help me back into the terminal. He was a smoker too, so he understood, but he didn't have one himself. Absolutely not his job, but it showed he just cared about different passengers' needs.

- At every single turn (and the passengers discussed this several times amongst ourselves), the airline made the right decision, at just the right time and so it always felt like we were ahead of the game. They couldn't change the situation or unblock the blockages but once they realised there was a blockage, they simply found a way around it. They didn't pussy-foot about and there was only very rarely a sense of "what's happening here?". Even in those moments, it was really just about the airline trying to work out for themselves what was happening.

- There were very few changes in team members. Where we went, they went. When we were able to sleep, even if it was on the floor of the terminal, they weren't. When we were able to sit and relax in the hotel, they were still chasing around trying to make plans for us despite having no sleep themselves. Whatever challenges we experienced, they experienced worse because they couldn't have a seat, grab a nap, get a shower or whatever either and not once did I get any sense that it bothered them. They must have been *shattered* with tiredness and they never let it show or gave even a hint of this not being their favourite work experience!

- When the Regional Manager turns up to deliver a short speech to passengers who haven't seen a bed or a shower in over 50 hours, basically telling them that there's no quick end in sight, and they *applaud* you spontaneously during your speech and at the end, you know you're doing this thing right. Embarrassing though it is to admit it, and I suspect my extreme tiredness was a factor, I was practically wiping away a tear! In retrospect, I realise that it was because they seemed to genuinely care about our predicament. It's difficult to define the difference between this and fake customer care, but it was clear as day if you were there. He then hung around until every single passenger had asked whatever weird and wonderful questions they had and answered them with calm authority and honesty.

- The free food was endless and of great quality, despite my personal preferences. Not your European - here's a voucher for a cup of coffee and a sandwich. Instead - here are three proper meals a day at the right time. I'm guessing this was very important to most people, particularly the large number of families among the passengers and in the end (as you'll see in another blog post), they moved us at one point from one hotel to another, just so people could eat and wash.

- As soon as it became clear that MAA was shut down for days, they made a (heavily caveated) promise that they would try to organise some extra capacity out of Bangalore as the fastest way to get us home. They had to work with the air authorities on this, they were in the midst of every airline trying to do the same, were operating to tight timescales and were honest with us that it was starting to look unlikely and so spent hours trying to rebook people on to other flights to Mumbai and other routes. But they came through. They promised they would try something for us, they worked on it and worked on it until they made it happen and they got people home.

I can't emphasise enough how fantastic SQ were over my 85-hour (read that again: 85-hour) trip home. If it was just me saying this, then it would be personal taste, but a bunch of extremely tired passengers with a wide demographic all seemed to agree whenever we discussed it or I heard others discussing it. The interesting (but really unsurprising) thing is that I also found my fellow passengers' understanding and behaviour far above what I've ever experienced in a situation like this. Mmmmm ... maybe when you treat people well, they behave well?

So, Seah Chee Chian and your team ... You should be extremely proud of yourselves! But I mean the whole team, working selflessly over hours and days and showing genuine care for your customers, which is so rare. I'm not a fan of low cost airlines in general - each to their own - so the small difference in fares has never been a question for me and it's at times like this you remember you get what you pay for! However, I can put Singapore's efforts up against any full-fare airline I've ever flown with and I can't think of one that would have handled things as impressively. I just always knew I could count on SQ to take care of me.

You have a fan for life!


P.S. All of this and having the best airport on the planet (SIN) as your hub. What more could I ask for?

P.P.S. I was obviously delighted to get any seat on any plane going back to Singapore to be home again with Mads. So when I was asked whether I was happy to be downgraded to Economy it wasn't a long consideration, but I'll obviously be reclaiming the cost of that upgrade. I mean, the experience hasn't changed me *that* much! ;-)

P.P.P.S. ... and you would think that such a glowing tribute to such an amazing airline might, you know, increase my chances of an upgrade one day. (See? Ever the frequent flyer! LOL)

Playing with graphOra and Graphite

Marcelo Ochoa - Thu, 2015-12-10 11:57
Following Neto's blog post about graphOra (Docker Image) – Oracle Real Time Performance Statistics, I did my own test using a Docker image for 12c.
First, I started a 12c Docker DB using:
# docker run --privileged=true --volume=/var/lib/docker/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102

Next, start the Graphite Docker image:

# docker run --name graphs-db -p 80 -p 2003 -p 2004 -p 7002 --rm -ti nickstenning/graphite

Next, install the graphOra repository:

# docker run -ti --link ols:oracle-db netofrombrazil/graphora --host oracle-db --port 1521 --sid ols --create
Enter sys password: -------
Creating user graphora
Grant access for user graphora to create sessions
Grant select privilege on V$SESSION_EVENT, V$SYSSTAT, V$STATNAME for user graphora
---
GraphOra is ready to collect your performance data!

Finally, start the graphOra Docker image:
# docker run -ti --link ols:oracle-db --rm --link graphs-db netofrombrazil/graphora --host oracle-db --port 1521 --sid ols --interval 10 --graphite graphs-db --graph-port 2003
phyReads: 0 phyWrites: 0 dbfsr: 43.30 lfpw: 43.30
phyReads: 0 phyWrites: 0 dbfsr: 0.00 lfpw: 0.00
phyReads: 0 phyWrites: 0 dbfsr: 0.00 lfpw: 0.00
and that's all; happy monitoring.
Here is a screenshot from my monitored session:
Note on the parameters used
First, compared with the original post, the graphOra parameter must be removed; I think this is due to changes in the build of the netofrombrazil/graphora Docker image.
Second, I used Docker's --link syntax to avoid putting IP addresses in the command-line options. My Oracle DB runs in a container named ols and the Graphite server in a container named graphs-db, so by passing --link ols:oracle-db --link graphs-db, the graphOra container gets connectivity and an /etc/hosts file updated with the IP addresses of both related containers.
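If you want to pull the collected series back out of Graphite without the web UI, the render API works well. A sketch (the metric path is an assumption; browse the metric tree in the Graphite UI to see exactly what graphOra writes, and use docker port graphs-db 80 to find the host port mapped to the container):

# curl 'http://localhost:<host-port>/render?target=graphora.*&from=-30min&format=json'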

Readings in Database Systems

Curt Monash - Thu, 2015-12-10 06:26

Mike Stonebraker and Larry Ellison have numerous things in common. If nothing else:

  • They’re both titanic figures in the database industry.
  • They both gave me testimonials on the home page of my business website.
  • They both have been known to use the present tense when the future tense would be more accurate. :)

I mention the latter because there’s a new edition of Readings in Database Systems, aka the Red Book, available online, courtesy of Mike, Joe Hellerstein and Peter Bailis. Besides the recommended-reading academic papers themselves, there are 12 survey articles by the editors, and an occasional response where, for example, editors disagree. Whether or not one chooses to tackle the papers themselves — and I in fact have not dived into them — the commentary is of great interest.

But I would not take every word as the gospel truth, especially when academics describe what they see as commercial market realities. In particular, as per my quip in the first paragraph, the data warehouse market has not yet gone to the extremes that Mike suggests,* if indeed it ever will. And while Joe is close to correct when he says that the company Essbase was acquired by Oracle, what actually happened is that Arbor Software, which made Essbase, merged with Hyperion Software, and the latter was eventually indeed bought by the giant of Redwood Shores.**

*When it comes to data warehouse market assessment, Mike seems to often be ahead of the trend.

**Let me interrupt my tweaking of very smart people to confess that my own commentary on the Oracle/Hyperion deal was not, in retrospect, especially prescient.

Mike pretty much opened the discussion with a blistering attack against hierarchical data models such as JSON or XML. To a first approximation, his views might be summarized as: 

  • Logical hierarchical models can be OK in certain cases. In particular, JSON could be a somewhat useful datatype in an RDBMS.
  • Physical hierarchical models are horrible.
  • Rather, you should implement the logical hierarchical model over a columnar RDBMS.

My responses start:

  • Nested data structures are more important than Mike’s discussion seems to suggest.
  • Native XML and JSON stores are apt to have an index on every field. If you squint, that index looks a lot like a column store.
  • Even NoSQL stores should and I think in most cases will have some kind of SQL-like DML (Data Manipulation Language). In particular, there should be some ability to do joins, because total denormalization is not always a good choice.

In no particular order, here are some other thoughts about or inspired by the survey articles in Readings in Database Systems, 5th Edition.

  • I agree that OLTP (OnLine Transaction Processing) is transitioning to main memory.
  • I agree with the emphasis on “data in motion”.
  • While I needle him for overstating the speed of the transition, Mike is right that columnar architectures are winning for analytics. (Or you could say they’ve won, if you recognize that mop-up from the victory will still take 1 or 2 decades.)
  • The guys seem to really hate MapReduce, which is an old story for Mike, but a bit of a reversal for Joe.
  • MapReduce is many things, but it’s not a data model, and it’s also not something that Hadoop 1.0 was an alternative to. Saying each of those things was sloppy writing.
  • The guys characterize consistency/transaction isolation as a rather ghastly mess. That part was an eye-opener.
  • Mike is a big fan of arrays. I suspect he’s right in general, although I also suspect he’s overrating SciDB. I also think he’s somewhat overrating the market penetration of cube stores, aka MOLAP.
  • The point about Hadoop (in particular) and modern technologies in general showing the way to modularization of DBMS is an excellent one.
  • Joe and Mike disagreed about analytics; Joe's approach rang truer for me. My own opinion is:
      • The challenge of whether anybody wants to do machine learning (or other advanced analytics) over a DBMS is sidestepped in part by the previously mentioned point about the modularization of a DBMS. Hadoop, for example, can be both an OK analytic DBMS (although not fully competitive with mature, dedicated products) and of course also an advanced analytics framework.
      • Similarly, except in the short-term I'm not worried about the limitations of Spark's persistence mechanisms. Almost every commercial distribution of Spark I can think of is part of a package that also contains a more mature data store.
      • Versatile DBMS and analytic frameworks suffer strategic contention for memory, with different parts of the system wanting to use it in different ways. Raising that as a concern about the integration of analytic DBMS with advanced analytic frameworks is valid.
  • I used to overrate the importance of abstract datatypes, in large part due to Mike’s influence. I got over it. He should too. :) They’re useful, to the point of being a checklist item, but not a game-changer. A big part of the problem is what I mentioned in the previous point — different parts of a versatile DBMS would prefer to do different things with memory.
  • I used to overrate the importance of user-defined functions in an analytic RDBMS. Mike had nothing to do with my error. :) I got over it. He should too. They’re useful, to the point of being a checklist item, but not a game-changer. Looser coupling between analytics and data management seems more flexible.
  • Excellent points are made about the difficulties of “First we build the perfect schema” data warehouse projects and, similarly, MDM (Master Data Management).
  • There’s an interesting discussion that helps explain why optimizer progress is so slow (both for the industry in general and for each individual product).

Related links

  • I did a deep dive into MarkLogic’s indexing strategy in 2008, which informed my comment about XML/JSON stores above.
  • Again with MarkLogic as the focus, in 2010 I was skeptical about document stores not offering joins. MarkLogic has since capitulated.
  • I’m not current on SciDB, but I did write a bit about it in 2010.
  • I’m surprised that I can’t find a post to point to about modularization of DBMS. I’ll leave this here as a placeholder until I can.
  • Edit: As promised, I’ve now posted about the object-relational/abstract datatype boom of the 1990s.

Using Apache Drill REST API to Build ASCII Dashboard With Node

Tugdual Grall - Thu, 2015-12-10 04:56
Read this article on my new blog. Apache Drill has a hidden gem: an easy-to-use REST interface. This API can be used to query, profile, and configure the Drill engine. In this blog post I will explain how to use the Drill REST API to create ASCII dashboards using Blessed Contrib.

Prerequisites:

  • Node.js
  • Apache Drill 1.2

For this post, you will use the SFO…
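Even from the teaser, Drill's REST interface is easy to try from the command line before wiring it into Node. A minimal sketch against a local Drill in embedded mode (default port 8047), querying the sample data bundled on Drill's classpath:

curl -s -X POST http://localhost:8047/query.json \
     -H 'Content-Type: application/json' \
     -d '{"queryType":"SQL","query":"SELECT * FROM cp.`employee.json` LIMIT 3"}'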

My Indian Adventure - Part 1

Doug Burns - Wed, 2015-12-09 08:29
Last week I had a small adventure and wanted to record some of the events before I forget them, and to take the opportunity to praise both the good people of Chennai and the stellar staff of Singapore Airlines. You'll find nothing about Oracle here and, unless you're a friend or my family, probably not much to care about, but those are the people I'm writing this for.

I suppose it began the previous week, when we received a travel advisory ahead of my short two-night business trip warning us of fresh rains and flooding risk in Chennai. I asked my boss if it was really a good idea for us to travel, particularly as I had to be back on Wednesday morning for my partner's birthday trip to Phuket. But the decision was made, and so I found myself in a somewhat wet Chennai on Sunday night.

However, other than some occasional rain and the residual effects of earlier flooding - Chennai has been living with this for a while now - the business stuff went well and I woke up at 4am (jetlag) on Tuesday, looking forward to travelling home that night.

Tuesday

During my final meeting before travelling to the airport, one of the attendees suggested that we break it up, as people were getting calls from home to tell them that their homes were being flooded! So we broke up, the office cleared out, and we phoned for the car to come from our hotel 25 minutes away. Estimated time of arrival: 1-2 hours! Oh well. I'd be pushing it to make my flight, but would probably be fine.

We waited, and after the first hour I stood outside with an umbrella, sheltering under a concrete archway until I'd venture out with the brolly at each possible hotel-car sighting. It also gave me an opportunity to smoke outside but under the brolly. However, after an hour of this, I was absolutely drenched and my feet and trousers were soaking. Just me being an idiot as usual, but I would come to regret this more and more as time passed. Soaking clothes were not ideal for the trip to come, and I'd packed extremely lightly!

The car turned up at 6:15 and so began our journey to the hotel, with hopefully time for a quick beer and a chance to dry out a bit before going on to the airport.

We eventually arrived at the hotel an hour and 45 minutes later, and I was starting to panic because Chennai Airport (MAA) is one where arriving 2-3 hours before departure is definitely a good idea. Don't get me started on Indian airport security! I was 3 hours and 15 minutes from departure, so after switching to another car to give our poor driver a break, we set off immediately. The next hour and 15 minutes were frankly chaotic and worrying as we passed roads that were now rivers, with cars almost under water and the wake from our own car more like that generated by a boat. Despite a very hairy ending to the drive, we made it to the airport 2 hours before departure, and I breathed a huge sigh of relief because I knew I'd probably make it home now.

Except Singapore Airlines wouldn't check me in, because the flight was going to be seriously delayed, the situation was changing all the time, and they didn't want us stuck air-side. The incoming plane had been diverted to Bangalore (BLR) because the MAA runway was closed. If the runway could be reopened, they would fly the plane in from BLR, board us, and we could fly home, but it was clear there'd be a long delay in any case. I decided it was best to stick around, as I really needed to get home, but what sealed it was that there were now no rooms at all in the hotel I'd checked out of. I could share my boss's room, but that was the best on offer, and all taxis had stopped operating from the airport anyway.

After an hour or two, the flight was cancelled and the runway closed until 6am. Singapore Airlines immediately informed us what was happening and organised hot airline-style food and a blanket each. The food was the first of many South Indian meals I was to face over the course of the next few days, and those who know me well know that means I was condemned to mild hunger! LOL. Fortunately I had a giant bag of Haribo Gold Bears I could dig into occasionally ;-)

Wednesday

Though the blanket was OK, sleeping on the marble floor of an airport terminal with your rucksack as a pillow and a thin blanket is never going to be an enjoyable experience, and I think I managed about an hour. Others who had managed to commandeer seats and benches seemed to fare better. Here was my slot - always go Business Class, folks! ;-)




I wandered up and down the terminal aimlessly (and there really isn't much else to do in MAA), occasionally trying to get out of the terminal building through security so I could have a smoke. Did I mention how I feel about Indian security guys? Really, just don't get me started on them!

I was hearing rumours from home that Singapore Airlines were flying a plane in and we would be able to get out, so I stuck with it, but ultimately it became clear that the runway was going to be closed for some time, at which point Singapore stepped in and took control of the situation. They cancelled the flight and organised a bus to the Hilton Chennai, where we wouldn't be able to have rooms (there were really none available, and they offered to pay the costs of anyone who could find one) but we could at least get some food and get away from MAA. It was yet another great decision, as MAA was starting to descend into chaos. After a surprisingly easy and short bus drive, we found ourselves at the Hilton, but I wasn't sure how much of a benefit being able to stay in Ballroom 2 for hours was going to be.




Over time I came to realise it was a great move when I started hearing reports of what a car crash the MAA terminal had become. We also had wifi for a few hours, which meant I was able to contact Mads so she could start rebooking our trip to Phuket for the next day, in case I was going to get back to Singapore in time. Our original Wednesday departure was clearly a no-go by this stage.

It also helped that we could now get some decent coffee and biscuits, and Singapore and the Hilton could start serving up a really rather good hot buffet lunch. All South Indian food, of course! But then, what else should I expect really? LOL

But at least there were chairs, and power sockets, and some wifi, and even occasionally a little 3G, though Chennai's communications infrastructure was slowly but surely disappearing into the surrounding water! I could go outside, try to find reception, smoke, and chat to the Singapore Airlines staff who were taking care of us, and two of those trips outside will stay with me for a while. (Note that although the flooding doesn't look too bad here, this was definitely one of the better streets, and it got much worse later...)




The first was when I was smoking with one of the SQ guys (hopefully not something that's disallowed, but I'm not handing his name over anyway! ;-)) and I asked him how he thought things were looking. He showed me a video he'd taken of the runway area and my heart sank. It was a lake. A big lake. With waves and stuff. He told me that realistically, nothing would be flying out of MAA any time soon. At the same time, I settled into the idea that this was going to be a long trip; maybe it's something about my military upbringing, but I knew that we'd just have to put up with whatever was coming and we'd get there in the end.

Besides, the next visit outside cheered me up no end. As I was passing the time, smoking and day-dreaming, a commotion broke out in the crowd in the street, with people running and pushing and laughing and shouting, and I genuinely thought a mini-riot was breaking out.



We all rushed over to see what was going on and then I realised, but I didn't get a photo of it! The crowd were grappling with a large fish! It must have been a good 2.5-3 feet long and fat. Absolutely not a tiddler! As they caught it, they all ran back up the street, laughing and celebrating with their prize. 

Catching fish in the street with your hands. Now *that's* flooding!

More to follow ....

More OTN VTS OnDemand Highlighted Sessions

OTN TechBlog - Tue, 2015-12-08 13:09

Today we are featuring more sessions from each OTN Virtual Technology Summit replay group. See the session titles and abstracts below for content created by Oracle employees and community members. Watch right away, then join the group to interact with other community members and stay up to date on when NEW content is coming!

Master Data Management (MDM) Using Oracle Table Access for Hadoop - By Kuassi Mensah, Oracle Corporation
The new Hadoop 2 architecture has led to a bloom of compute engines. Some Hadoop applications, such as Master Data Management and Advanced Analytics, perform the majority of their processing in Hadoop but need access to data in an Oracle database, which is the reliable and auditable source of truth. This technical session introduces the upcoming Oracle Table Access for Hadoop (OTA4H), which exposes Oracle database tables as Hadoop data sources. It will describe the OTA4H architecture, projected features, and performance/scalability optimizations, and discuss use cases. A demo of various Hive SQL and Spark SQL queries against an Oracle table will be shown.

What's New for Oracle and .NET (Part 2)  - By Alex Keh, Senior Principal Product Manager, Oracle
With the release of ODAC 12c Release 4 and Oracle Database 12c, .NET developers have many more features to increase productivity and ease development. These sessions explore new features introduced in recent releases, with code and tool demonstrations using Visual Studio 2015.

How To Increase Application Security & Reliability with Software in Silicon Technology - By Angelo Rajuderai, Worldwide Technology Lead Partner Adoption for SPARC, Oracle and Ikroop Dhillon, Principal Product Manager, Oracle

Learn about Software in Silicon Application Data Integrity (ADI) and how you can use this revolutionary technology to catch memory access errors in production code. Also explore key features for developers that make it easy and simple to create secure and reliable high performance applications.


Real-Time Service Monitoring and Exploration  - By Oracle ACE Associate Robert van Molken
There is a great deal of value in knowing which services are deployed and correctly running on an Oracle SOA Suite or Service Bus instance. This session explains and demonstrates how to retrieve this data using JMX and the available Managed Beans on Weblogic. You will learn how the data can be retrieved using existing Java APIs, and how to explore dependencies between Service Bus and SOA Suite. You'll also learn how the retrieved data can be used to create a simple dashboard or even detailed reports.


Shakespeare Plays Scrabble  - By José Paumard Assistant Professor at the University Paris 13
This session will show how lambdas and Streams can be used to solve a toy problem based on Scrabble. We are going to solve this problem with the Scrabble dictionary, the list of the words used by Shakespeare, and the Stream API. The three main steps shown will be the mapping, filtering and reduction. The mapping step converts a stream of a given type into a stream of another type. Then the filtering step is used to sort out the words not allowed by the Scrabble dictionary. Finally, the reduction can be as simple as computing a max over a given stream, but can also be used to compute more complex structures. We will use these tools to extract the three best words Shakespeare could have played. 



Oracle Cloud – Glassfish Administration (port 4848 woes)

John Scott - Tue, 2015-12-08 04:46

In the previous post I discussed accessing the DBaaS Monitor application, in this post I’ll show how to access the Glassfish Admin application.

On the home page for your DBaaS Instance, you’ll see a link for ‘Glassfish Administration’

cloud_home.png

However, if you click on that link you'll probably find the browser just hangs and nothing happens. It took me a while to notice, but unlike the DBaaS monitor, which is accessed via HTTP/HTTPS, Glassfish administration is done via port 4848 (you'll notice 4848 in the URL once your browser times out).

The issue here is that by default port 4848 isn’t open in your network rules for your DBaaS instance, so the browser cannot connect to it.

So you have a couple of options –

  1. Open up port 4848 to the world (or to just specific IP addresses)
  2. Use an SSH Tunnel

I tend to go with option 2, since I've found that when travelling and staying in a hotel with option 1, you might occasionally be accessing from an IP address that isn't in your whitelist.

As I blogged previously, we can set up an SSH tunnel to port 4848 pretty easily from the terminal, with a command similar to:

ssh -L 4848:localhost:4848 -i oracle_cloud_rsa opc@<my.remote.ip.here>

So now we should be able to access Glassfish using the URL http://localhost:4848

Why localhost? Remember, when you set up an SSH tunnel you connect to your own local machine, which then tunnels the traffic to the remote host via SSH over the ports you specify.

Once we've done that, you should be able to access the Glassfish Administration homepage.

glassfish.png

You should be able to log in using the username 'admin' and the same password you specified when you created your DBaaS instance.

glassfish2.png

The first thing I noticed was that a pretty old version of Glassfish is installed by default (version 3.1.2.2 in my case), even though Glassfish 4 was already out. So you may wish to check whether you're missing any patches or need some Glassfish 4 features.

This is definitely one downside of going with the pre-bundled installation: you will (by definition) get an image that was created some time ago, so you need to check whether any patches etc. have been released since the image was created.

I’m not going to go into detail on Glassfish itself, since it’s pretty much a standard (3.1) Glassfish and there are lots of blog posts and documents around that go into more detail. However if you go into the application section you’ll see that it comes pre-bundled with the APEX Listener / ORDS and also DBaaS Monitor which is how you can access them via the Glassfish server.

glassfish_apps.png

 


Pages

Subscribe to Oracle FAQ aggregator