Feed aggregator

Twilio Signal Conference – Sessions

Oracle AppsLab - Tue, 2016-06-14 11:34

Let's dive into the Twilio sessions.

The sessions were generally divided into the following four tracks:

  • Learn
    See the latest progress in software and cloud communications, talk shop with the Twilio engineers who developed it, and get into the details of how to use the software.
  • Inspire
    Hear from industry experts shaping the future of tech with the latest software.
  • Equip
    Get details on hurdles, tricks, and solutions from Twilio customers building communications with software APIs.
  • Action
    Define business plans for modern communications with real-life ROI and before-and-after stories.

My interest was mostly in the Inspire track, and with AI and virtual assistants being the hot topics nowadays, those were the sessions I targeted at the conference.


Twilio messaging integration with other platforms

This half year has been the "half year of virtual assistants," with the announcements of the controversial Tay and of Cortana from Microsoft, the Messenger bot platform from Facebook, Allo from Google I/O and Siri from WWDC yesterday.  Every giant wants to squeeze into the same space and get a share of it.  There were a lot of sessions about bots at Signal, and I had the feeling that Twilio had carefully hand-picked them to suit the audience.  IBM, Microsoft and Slack all presented their views and technologies around bots, and I learned a lot from them.  It is a bit odd that api.ai sponsored the lunch and had a booth at the conference, but did not present in any sessions (as far as I know).

In the schedule, there was a session called Terrible Ideas in Git by Corey Quinn.  I love Git, and when I saw the topic, my immediate reaction was: how can anyone say Git is terrible (and be right)?  I just had to go and take a look.  To my surprise, it was a very fun talk; I had a good laugh and enjoyed it a lot.  I am glad I did not miss that session.


Yep, deal with it!


Possibly Related Posts:

Twilio Signal Conference – All about Twilio

Oracle AppsLab - Tue, 2016-06-14 11:33

This year I attended the Twilio Signal Conference.  As in its first year, it was held at Pier 27 in San Francisco.  It was a two-day, action-packed conference with a keynote session each morning and breakout sessions running until 6 pm.

The developer experience provided by the conference was superb compared to a lot of other developer conferences nowadays.  Chartered buses with wifi were provided for commuters using different transit lines.  Snacks were served all day.  Six 30-minute sessions to choose from in every time slot meant no waiting in line; you could always attend the sessions you wanted (sorry, Google I/O).  For developers, at least for me, the most important thing was a special coffee stall that opened every morning to serve freshly brewed coffee to wake you up and energize you for the rest of the day.  With the CEO, among others, coding right in front of you during a keynote session to show some demos, it is as true a developer conference as you could hope for.


The whole conference was lit up by the Internet of Blings, with a live code demonstration on stage.

There were a lot of new products and features Twilio announced at Signal, and I will not spend time recapping them here; you can read more here and here.  The interesting thing to note is how Twilio got so big.  It started off with a text messaging service; it now also provides services for video, authentication, voice and routing.  It is the engine under the hood for fast-growing companies like Lyft and Uber.  It now offers the most complete messaging platform for developers to connect to their users, it has capabilities to reroute your numbers and tap into phone conversations, and it partners with T-Mobile to get into the IoT domain.  Twilio's ambition and vision are not small at all.  The big question is: how did Twilio achieve all this?  The answer can be debated, but for me it all boils down to simplicity: making things really easy and really good, so they just work.  The Twilio APIs are very easy to use and do exactly what they say, no more, no less.  Their reliability is superb.  That is what developers want and rely on.


Jeff Lawson talks about Twilio’s infrastructure

Twilio as a messaging hub


But wait, there’s more. Check out my thoughts on the sessions at Signal and my $Bash night experience. I almost won a chance to play with the mysterious Magic Leap, and I might yet get access for finishing second. Stay tuned.

Possibly Related Posts:

Services -- 1 : Services in non-RAC 12c MultiTenant

Hemant K Chitale - Tue, 2016-06-14 10:22
It is generally accepted that service definition is required in RAC environments.

However, the concept of Services was made available in 8i --- predating RAC.   Services can be defined in non-OPS / non-RAC / non-MultiTenant / MultiTenant environments.  A single PDB in a 12c MultiTenant database can host multiple services.

A quick start to the implementation.

Note: srvctl is used to create and manage services in a RAC environment; it registers the services with the Cluster Registry.  In a non-RAC environment, use DBMS_SERVICE.
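For comparison only, the equivalent registration in a RAC environment would look roughly like this (a sketch; the database and instance names are illustrative, not from this system):

srvctl add service -db ORCL -service NEW_APP1 -preferred ORCL1,ORCL2
srvctl start service -db ORCL -service NEW_APP1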

First, no services are running :

[grid@ora12102 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-JUN-2016 23:14:48

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 14-JUN-2016 23:14:28
Uptime 0 days 0 hr. 0 min. 19 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/ora12102/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@ora12102 ~]$


Next, I startup the MultiTenant CDB database.

[oracle@ora12102 ~]$ . oraenv
ORACLE_SID = [CDB1] ? CDB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12102 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jun 14 23:15:47 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 1056968224 bytes
Database Buffers 570425344 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>


[grid@ora12102 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-JUN-2016 23:17:23

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 14-JUN-2016 23:14:28
Uptime 0 days 0 hr. 2 min. 54 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/ora12102/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=ora12102)(PORT=5501))(Security=(my_wallet_directory=/u01/app/oracle/admin/CDB1/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@ora12102 ~]$


So, now I have the default pdb1 service for pluggable database PDB1 running.

Next, I manually create two new services and start them.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 ~]$ sqlplus 'sys/oracle@PDB1 as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jun 14 23:20:30 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL>
SQL> exec dbms_service.create_service(service_name=>'NEW_APP1',network_name=>'NEW_APP1');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.create_service('NEW_APP2','NEW_APP2');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('NEW_APP2');

PL/SQL procedure successfully completed.

SQL>
[grid@ora12102 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-JUN-2016 23:22:54

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 14-JUN-2016 23:14:28
Uptime 0 days 0 hr. 8 min. 26 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/ora12102/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=ora12102)(PORT=5501))(Security=(my_wallet_directory=/u01/app/oracle/admin/CDB1/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@ora12102 ~]$



The two new services NEW_APP1 and NEW_APP2 that I created in PDB1 are now registered with the listener.  Remember that these services are in the Database Instance CDB1.
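For a quick check that a new service is usable, a client can connect through it with an EZConnect string along these lines (a sketch; the HR account is illustrative, while host and port are as shown in the listener output above):

sqlplus hr/hr@//ora12102:1521/NEW_APP1

SQL> select sys_context('USERENV','SERVICE_NAME') from dual;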

My next blog post will be about using these services.

.
.
.

Categories: DBA Blogs

Using the Oracle JET QuickStart Template

Andrejus Baranovski - Tue, 2016-06-14 09:38
In this post I will show how you can create single-page applications with multiple modules in Oracle JET. As a starting point, you should use the Oracle JET QuickStart template; read about it here. This template comes with a sample structure, and you can follow it to split application functionality into different groups.

Download the sample application, which is based on the Oracle JET QuickStart template - JETCRUDApp_v10.zip (you must run the ADF BC REST application in JDEV 12.2.1 and JET in NetBeans 8). The QuickStart application is extended with a new module containing the CRUD implementation in JET (read more here).

I have slightly modified the template and removed the boxes for the navigation and complementary areas. It renders the main content block stretched across the screen:


Under the Employees tab, CRUD functionality to manage employee data is rendered. This includes the CRUD functionality described in my previous posts about JET (see the link above). Now it is part of a real JET application:


Let's take a look at the structure. The QuickStart template comes with a structure where each module is separated into its own HTML/JavaScript files. There is a group available for the employees CRUD - employees.html and employees.js:


The main page logic is implemented in index.html. This file defines the page structure and references the block with the dynamic module (loaded through the menu structure):


The list of available modules is registered with the router in the main.js file:
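As a rough sketch of what that registration looks like (the module names and labels here are illustrative, not the exact template code), the router configuration in main.js follows this pattern:

// Sketch: register the modules the router can switch between (names illustrative)
var router = oj.Router.rootInstance;
router.configure({
  'home':      { label: 'Home', isDefault: true },
  'employees': { label: 'Employees' }  // the new CRUD module
});
oj.Router.sync();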


The HTML part of the JET CRUD use case is copied into the employees.html file (this will be loaded through the main.js router):


The JavaScript part is copied into the employees.js file:


The JavaScript logic is changed to handle the table row selection event through a JavaScript listener method. The constructed employees view model is returned so that it is accessible from the router:
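As a rough sketch of that shape (the RequireJS dependencies and member names are illustrative, not the sample's exact code), employees.js defines a view model and returns it so the router/module binding can load it:

// Sketch: employees.js module returning its view model (names illustrative)
define(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojtable'],
  function (oj, ko, $) {
    function EmployeesViewModel() {
      var self = this;
      // collection / datasource setup for the ADF BC REST endpoint would go here
      self.rowSelectionListener = function (event, ui) {
        // react to the selected table row, e.g. populate the edit form
      };
    }
    // return the view model so the router / ojModule binding can use it
    // (some templates return an instance, others the constructor)
    return new EmployeesViewModel();
  });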

APEX Modify UT Breadcrumb

Denes Kubicek - Tue, 2016-06-14 06:28
While creating pages with the new UT I noticed that the breadcrumb breaks onto a new line after the second level, and this takes too much space. This example shows how to get around that problem with just a couple of CSS styling lines.



And this is how it looks like after applying the css.

Categories: Development

Oracle CURRENT_DATE Function with Examples

Complete IT Professional - Tue, 2016-06-14 06:00
The Oracle CURRENT_DATE function is another Oracle function for returning a date from the system. Find out what it does and how it’s different from the other functions in this article. Purpose of the Oracle CURRENT_DATE Function The CURRENT_DATE function is used to return the current date in the session timezone. The session timezone is […]
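A minimal illustration of the session-time-zone dependence (the offset is arbitrary and the output values will depend on your system):

-- CURRENT_DATE follows the session time zone; SYSDATE follows the database server's OS time
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
alter session set time_zone = '-5:00';
select sysdate, current_date from dual;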
Categories: Development

Oracle Joins the White House in Global Campaign to Empower Girls and Women

Oracle Press Releases - Tue, 2016-06-14 06:00
Press Release
Oracle Joins the White House in Global Campaign to Empower Girls and Women Pledges $3 Million to Educate Girls in Science, Technology, Engineering and Math

Redwood Shores, Calif.—Jun 14, 2016


President Obama endorses Oracle’s work in advancing computer science and STEM education globally.

On the heels of Oracle’s $200 million commitment to support Computer Science (CS) Education for All, today the White House announced Oracle’s additional $3 million investment to immerse girls worldwide in science, technology, engineering and math (STEM). The company’s commitment of direct and in-kind funding supports “Let Girls Learn,” a U.S. government initiative aimed at helping adolescent girls around the world go to school and stay in school.

“It takes 25 years to build a computer engineer, not 25 hours, so we need to get started,” said Safra Catz, Oracle CEO. “The computer sciences lose too many girls too early and once lost, it’s nearly impossible to get them back. We want more girls focused on building upon science and math fundamentals and we want more women choosing the technical disciplines because they are both prepared to do so and because they believe it will advance their career opportunities.”

Oracle will offer more than 65 educational events and reach over 55,000 young girls globally through a powerful nexus of its corporate social responsibility programs spanning Oracle Academy, Oracle Education Foundation, Oracle Giving and Volunteers, Oracle Women’s Leadership (OWL), and Oracle Diversity and Inclusion. Events will include summer computing camps, codefests, workshops and conferences designed to encourage and inspire adolescent girls to become original thinkers, creative designers and enterprising trailblazers.

Additionally, Oracle plans to expand its CS efforts in Egypt with an additional investment of nearly $1 million in educational resources and services over the next four years. The commitment is part of a new collaboration between the Ministry of Education in Egypt, the United States Agency for International Development (USAID) and Oracle Academy, Oracle’s philanthropic educational program that impacts more than 2.6 million students in 106 countries. The partnership will support computing education in nine newly-created STEM schools throughout the country, including one boarding school exclusively for girls which will accept up to the top 10 percent of girls across the governorates, reaching 150 girls each year and providing three years of paid education for each girl.

As part of the company’s global campaign to support girls and women in technology, Oracle will drive several notable projects:

  • Oracle Academy will team with Arizona State University and others under the USAID Build-IT project to help women in Vietnam develop into IT leaders. President Obama recently endorsed Oracle’s contribution in furthering this effort.
  • Oracle Education Foundation and Oracle Volunteers will team to teach girls coding, electrical engineering and project management through girls-only workshops for Design Tech High School (d.tech), an innovative, free California public high school. Oracle is building d.tech’s new facility at its headquarters in Redwood Shores, Calif., making d.tech the world’s first public high school on a technology campus.
  • Oracle Giving and Oracle Academy will award grants and sponsorships globally to nonprofit organizations striving to increase girls’ access to educational opportunities and encourage them to pursue degrees in computer science and STEM fields.
  • Oracle Giving will continue its support for MentorNet, which engages STEM professionals in the virtual mentoring of undergraduates, 66% of whom are women.
 
Contact Info
Julie Sugishita
Oracle Corporate Communications
1.650.506.0076
julie.sugishita@oracle.com
About Oracle Academy

As Oracle’s flagship philanthropic educational program, Oracle Academy advances computer science education globally to drive knowledge, innovation, skills development, and diversity in technology fields. To this end, Oracle Academy offers students and educational institutions a free and complete portfolio of software, curriculum, hosted technology, faculty trainings, support, and certification resources. The program works with public and private partners to provide the tools educators need to engage, inspire and prepare students to become innovators and leaders of the future. Through Oracle Academy, students receive hands-on experience with the latest technologies, helping make them college and career ready in the era of big data, cloud computing, the Internet of Things, and beyond.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Julie Sugishita

  • 1.650.506.0076

Checkbox Item check / uncheck all

Jeff Kemp - Tue, 2016-06-14 04:00

If you have an ordinary checkbox item based on a list of values, here is a function which will set all the values to checked or unchecked:

function checkboxSetAll (item,checked) {
  $("#"+item+" input[type=checkbox]").attr('checked',checked);
  $("#"+item).trigger("change");
}

For example:

checkboxSetAll("P1_ITEM", true); //select all
checkboxSetAll("P1_ITEM", false); //select none

It works this way because a checkbox item based on a LOV is generated as a set of checkbox input items within a fieldset.
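As a side note (not part of the original post), on jQuery 1.6+ the checked state is a property rather than an attribute, so a variant using .prop() can behave more predictably once a user has already toggled some of the boxes:

function checkboxSetAll (item, checked) {
  // set the checked property (not the attribute) on every checkbox in the item's fieldset
  $("#" + item + " input[type=checkbox]").prop('checked', checked);
  // fire the change event so dependent dynamic actions still react
  $("#" + item).trigger("change");
}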

Note: If it’s a checkbox column in a report, you can use this trick instead: Select All / Unselect All Checkbox in Interactive Report Header


Filed under: APEX Tagged: APEX, javascript, jQuery, tips-&-tricks

database options - advanced security

Pat Shuff - Tue, 2016-06-14 02:07
Advanced Security's Transparent Data Encryption (TDE) stops would-be attackers from bypassing the database and reading sensitive information from storage by enforcing data-at-rest encryption in the database layer. Earlier, when we talked about partitioning and compression, we looked at tablespace files and how to manage them. If these files are stored in clear text, anyone who can access the file system could theoretically read the dbf file and do a clear-text search. If, for example, we have a credit card number "1234 5678 9012 3456", we could find this string with a strings, grep, or od statement (octal dump to show text data). If we don't know the credit card number we can just dump the data and see the values stored in the database. With enough trial and error we can figure out the structure of the database and correlate names to credit card information.

By default, all data stored in the cloud has TDE enabled. This is configured in the $ORACLE_HOME/network/admin/sqlnet.ora file: if you look at this file, you will see that ENCRYPTION_WALLET_LOCATION is defined, as is SQLNET.ENCRYPTION_TYPES_SERVER. When the database is created, a wallet is generated based on your ssh keys, allowing you to access your data. In the database that we created, the wallet files are in /u01/app/oracle/admin/ORCL/tde_wallet. There is a file called cwallet.sso that is used any time someone connects to the database through port 1521 or through the sqlplus command. If we rename this file to something else, we can connect to the database and create clear-text files. This is not recommended; we are doing it here to highlight the differences between IaaS and PaaS as well as the need for data encryption in the cloud. With a database installation on top of compute, as is done with EC2, Azure Compute, and Oracle Compute, we have to enable TDE, configure the wallet and configure cwallet.sso ourselves. With PaaS, this is all done for you, and you can manage the keystore and rotate key access.
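The relevant sqlnet.ora entries look roughly like this (the wallet directory is the one used in this environment; the exact list of encryption types may differ on your system):

ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/ORCL/tde_wallet)))
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256, AES192, AES128)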

Note that it is not recommended that you execute these commands. They will weaken the security of your database and allow clear-text storage and transmission of data across the internet. We are doing this purely as an example.

cat $ORACLE_HOME/network/admin/sqlnet.ora
cd /u01/app/oracle/admin/ORCL/tde_wallet
ls
This should allow you to see the cwallet.sso file, which enables automatic wallet connection upon login. If we want to change the encryption default, we can first look at the parameter encrypt_new_tablespaces and see that it is set to CLOUD_ONLY, which encrypts everything. We want to change this to DDL, which says that we only encrypt if we tell it to. First, though, we create a tablespace and a banking user with encryption still turned on. This is done with
sqlplus / as sysdba
alter session set container=PDB1;
drop tablespace banking including contents and datafiles;
create tablespace banking datafile '/u02/app/oracle/oradata/ORCL/PDB1/banking.dbf' size 20m;
We then create a banking user as well as a paas_dba user; one is used to create an encrypted table and the other to create a clear-text table:
drop user banking;
create user banking identified by "PaasBundle_1" default tablespace banking; 
grant create session to banking;
grant unlimited tablespace to banking;
drop user paas_dba;
create user paas_dba identified by "PaasBundle_1";
grant create session to paas_dba;
grant dba to paas_dba;

We now connect to the PDB1 as the banking user and create a table in our encrypted tablespace

connect banking/PaasBundle_1@PDB1
create table banking_customers (first_name varchar(20), last_name varchar(20), ccn varchar(20)) tablespace banking;
insert into banking_customers values('Mike','Anderson','5421-5424-1451-5340');
insert into banking_customers values('Jon','Hewell','5325-8942-5653-0031');
commit;
alter system flush buffer_cache;
select ccn from banking_customers;
This should create a table with two entries. The table will be encrypted and stored in the banking.dbf file, so if we do a string search on this file we will not find the credit card number starting with 5421. Now that we have an encrypted table, we need to reconnect to the database and disable encryption for newly created tablespaces. To do this we change the parameter from CLOUD_ONLY to DDL as sysdba.
sqlplus / as sysdba
show parameter encrypt_new_tablespaces;
alter system set encrypt_new_tablespaces=DDL SCOPE=BOTH;
We can now create a new tablespace, and its contents are only encrypted if we include an encrypt clause in the create statement; the tablespace is no longer encrypted by default. We do this operation as paas_dba, who has the dba rights to create a tablespace and table.
connect paas_dba/PaasBundle_1@PDB1;
drop tablespace bankingCLEAR including contents and datafiles;
create tablespace bankingCLEAR datafile '/u02/app/oracle/oradata/ORCL/PDB1/bankingCLEAR.dbf' size 20m;
create table banking_Clearcustomers tablespace bankingCLEAR as select * from banking_customers;
select ccn from banking_Clearcustomers;
We should get back two entries from both select statements and see two credit card numbers. We can then exit sqlplus and look at the banking.dbf and bankingCLEAR.dbf files to see if we can lift credit card numbers from them. By executing
strings bankingCLEAR.dbf | grep 5421
strings banking.dbf | grep 5421
we see that we get the credit card number from the bankingCLEAR.dbf file: the data was stored in clear text. It is important to remember that all data should be encrypted in motion and at rest. We need to change the encrypt_new_tablespaces parameter back to CLOUD_ONLY going forward. By default we connect to the data using the encryption wallet; we could disable that as well, on top of turning off default encryption.
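To confirm which tablespaces actually ended up encrypted, and to check the wallet state, a quick query along these lines can be run as a privileged user (a sketch, not part of the original demo script):

-- which tablespaces are encrypted, and what state is the wallet in
select tablespace_name, encrypted from dba_tablespaces;
select wrl_type, status, wallet_type from v$encryption_wallet;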

In summary, we have looked at what it takes to encrypt data at rest in the cloud. By default it is turned on, and we don't have to do anything with platform as a service. If we are using infrastructure as a service we need to purchase the Advanced Security option, turn on tablespace encryption by changing a parameter, enable the wallet for our login, and install keys for the wallet. Platform as a service provides levels of protection typically above and beyond what most customers have in their data centers. The credit card loss that happened at Target a few years ago happened because they owned the Advanced Security option for the database but did not turn the feature on. An outside consultant (working on something non-IT related) got access to shared storage and pulled the dbf files onto a USB drive. They were able to get thousands of credit cards from the data center, costing the CIO and IT staff their jobs. Today we learned how to turn off TDE in the cloud to illustrate what it takes to turn it on in your data center, and we looked at how simple it is to pull data from a dbf file if we know what we are looking for. We could just as easily have looked for number patterns and people's names and correlated the two as valid credit card numbers.

I would like to thank the GSE team at Oracle for doing 90% of the work on this blog. I liberally hijacked the demo scripts and most of the content from demo.oracle.com, PaaS - Data Management (Solutions Demo) by Brijesh Karmakar. The GSE team creates demo environments and scripts for specific products; this demo is an Oracle internal demo and can be requested from your pre-sales consultant in your area. I took the demo code from the Database Vault and Encryption demo script written on 08-Jun-2016. I have to admit that searching demo.oracle.com for TDE demo scripts was very opportune, given that I wrote this blog on the 9th and the script was published on the 8th.

Oracle Cloud Helps Utilities Make Better, Faster Business Decisions

Oracle Press Releases - Tue, 2016-06-14 02:05
Press Release
Oracle Cloud Helps Utilities Make Better, Faster Business Decisions New analytics capabilities enable better data collection and analysis to improve business insights

EEI Annual Convention - Chicago, IL—Jun 14, 2016

The utility industry is in the midst of a significant transformation, and the increasing prevalence of digital technologies is driving organizations to embrace innovation. Capturing and analyzing data across infrastructures now play a crucial role in addressing changes and challenges. Today, Oracle Utilities announced new releases of two of its cutting edge analytics solutions, which enable utilities to better access, organize and analyze the massive amount of data they collect while saving time and reducing IT and maintenance costs.
 
Oracle Utilities Analytics Cloud Service helps utilities easily integrate their on-premise customer, meter, device, and workforce data into a single platform in the cloud. The cloud solution’s flexibility and scalability enables utilities of all sizes to leverage powerful data analytics without the need for in-house analytics experts. The cloud “future-proofs” smart grid and analytics strategies—enabling utilities to leverage the latest analytics capabilities to remain at the forefront of industry innovation. This new cloud service further enhances Oracle Utilities’ cloud analytics suite, which includes Oracle DataRaker, an advanced analytics solution providing actionable insight and predictive analytics services for big data.
 
“As a leading systems integrator and Oracle PartnerNetwork member, we worked closely with the Oracle Utilities team on co-development activities for Oracle Utilities Analytics,” said John Gustafson, Vice President, Energy and Utilities at Cognizant.
 
“This solution can enable utilities around the world to leverage robust analytics capabilities and gain immediate access to valuable insight. Drawing on Cognizant’s extensive global experience and expertise in implementing Oracle Business Intelligence and working with Oracle Cloud, we are continuing to develop new implementation resources for Oracle Utilities Analytics Cloud Service for utilities worldwide.”
 
The new version of Oracle Utilities Work and Asset Management Analytics provides users with an extensive platform featuring pre-built analytics based on industry best practices to provide the most value to utilities for asset maintenance strategies. The upgraded solution offers:
  • Four new dashboards with 26 pages of analytics to support 74 metrics related to work and asset management questions—enabling users to drill down even further into the data associated with these analytics.
  • Hundreds of different report outputs based on organization hierarchy. Each report has the ability to provide answers to any level of the organization.
 
“Utilities today collect massive amounts of data and they do not always have the means to analyze it and turn it into actionable business intelligence,” said Rodger Smith, senior vice president and general manager, Oracle Utilities. “Oracle Utilities Analytics Cloud Service and Oracle Utilities Work and Asset Management Analytics provide utilities with the ability to analyze their data more quickly, and in a more meaningful way.”
Contact Info
Samantha Thai
Oracle
+1.510.917.5564
samantha.thai@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Samantha Thai

  • +1.510.917.5564

You Scratch Your Head And Ponder Why It Is You Go With Maximum Core Count Xeons. I Can’t Explain That, But This Might Help.

Kevin Closson - Tue, 2016-06-14 00:36

Folks that have read my blog for very long know that I routinely point out that Intel Xeon processors with fewer cores (albeit same TDP) get more throughput per core. Recently I had the opportunity to do some testing of a 2-socket host with 6-core Haswell EP Xeons (E5-2643v3) connected to networked all-flash storage. This post is about host capability so I won’t be elaborating on the storage. I’ll say that it was block storage, all-flash and networked.

Even though I test myriads of systems with modern Xeons it isn’t often I get to test the top-bin parts that aren’t core-packed.  The Haswell EP line offers up to 18-core parts in a 145w CPU.  This 6-core part is 135w and all cores clock up to 3.7GHz–not that clock speed is absolutely critical for Oracle Database performance mind you.

Taking It For a Spin

When testing for Oracle OLTP performance the first thing to do is measure the platform’s ability to deliver random single-block reads (db file sequential read). To do so I loaded 1TB scale SLOB 2.3 in the single-schema model. I did a series of tests to find a sweet-spot for IOPS which happened to be at 160 sessions. The following is a snippet of the AWR report from a 5-minute SLOB run with UPDATE_PCT=0. Since this host has a total of 12 cores I should think 8KB read IOPS of 625,000 per second will impress you. And, yes, these are all db file sequential reads.

[AWR report snippet: load profile showing ~625,000 read IOPS]

At 52,093 IOPS per CPU core I have to say this is the fastest CPU I’ve ever tested. It takes a phenomenal CPU to handle this rate of db file sequential read payload. So I began to wonder how this would compare to other generations of Xeons. I immediately thought of the Exadata Database Machine data sheets.

Before I share some comparisons I’d like to point out that there was a day when the Exadata data sheets made it clear that IOPS through the Oracle Database buffer cache costs CPU cycles–and, in fact, CPU is often the limiting factor. The following is a snippet from the Exadata Database Machine X2 data sheet that specifically points out that IOPS are generally limited by CPU. I know this. It is, in fact, why I invented SLOB way back in the early 2000s. I’ve never seen an I/O testing kit that can achieve more IOPS per DB CPU than is possible with SLOB.

[Exadata X2 data sheet snippet: footnote noting that IOPS are generally limited by CPU]

Oracle stopped using this foot note in the IOPS citations for Exadata Database Machine starting with the X3 generation. I have no idea why they stopped using this correct footnote. Perhaps they thought it was a bit like stating the obvious. I don’t know. Nonetheless, it is true that host CPU is a key limiting factor in a platform’s ability to cycle IOPS through the SGA. As an aside, please refer to this post about calibrate_io for more information about the processor ramifications of SGA versus PGA IOPS.

So, in spite of the fact that Oracle has stopped stating the limiting nature of host CPU on IOPS, I will simply assert the fact in this blog post. Quote me on this:

Everything is a CPU problem

And cycling IOPS through the Oracle SGA is a poster child for my quotable quote.

I think the best way to make my point is to simply take the data from the Exadata Database Machine data sheets and put it in a table that has a row for my E5-2643v3 results as well. Pictures speak thousands of words. And here you go:

[Table: IOPS figures from the Exadata Database Machine data sheets alongside the E5-2643v3 result]

AWR Report

If you’d like to read the full AWR report from the E5-2643v3 SLOB test that achieved 625,000 IOPS please click on the following link: AWR (click here).

References

X2 data sheet
X3 data sheet
X4 data sheet
X5 data sheet
X6 data sheet

 


Filed under: oracle

Recursive WITH, part III: IS_LEAF

OraFAQ Articles - Mon, 2016-06-13 14:25

The CONNECT BY syntax provides a useful pseudocolumn, CONNECT_BY_ISLEAF, which identifies leaf nodes in the data: it’s 1 when a row has no further children, 0 otherwise. In this post, I’ll look at emulating this pseudocolumn using recursive WITH.
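For reference, this is how the pseudocolumn is used in a plain CONNECT BY query (using the classic EMP demo table as an assumed example):

-- is_leaf = 1 for employees who manage nobody, 0 otherwise
select ename, level, connect_by_isleaf as is_leaf
from   emp
start  with mgr is null
connect by prior empno = mgr;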

read more

The Cost of Doing Nothing

Kubilay Çilkara - Mon, 2016-06-13 12:18
For a business to become optimally successful, it absolutely must incorporate a quality life-cycle management system.  This begs the question:  Why do so many vendors miss the mark when it comes to providing the necessary updates and enhancements?  Developers and software companies should embrace their respective ALM systems as their staunch allies; and progressive IT organizations stay well ahead of the game by using progressive technology and best practices to ensure that high-quality products are on time and on budget while remaining fully compliant.  The goal of any ALM supplier should be to cater to its clients by properly supporting them by staying abreast of platform enhancements and being familiar with new languages, new devices, mobile demands, ever-changing compliance regulations and other real-time demands that must be continually addressed.

The bottom line remains:  in order for development leaders to not only survive but thrive, they must make the transition to the most up-to-date ALM solution possible.  Surprisingly, however, development leaders can be hesitant to adopt a modern ALM solution; yet the cost of doing nothing can be more expensive than one might imagine.

There are a handful of misguided reasons why an updated ALM solution might not be employed.  A few of those fallacies can include the following:

A New ALM Solution Would Be Too Cost-Prohibitive

Being the lead dog and staying ahead of the pack in this competitive world is absolutely paramount which is why a vendor must provide the crucial components such as product enhancements, platform updates, etc.  Research reveals some unsettling data:

  • 84% of IT projects become overdue or over budget
  • 31% of IT projects never reach completion due to being canceled
  • Completed IT projects deliver, on average, only 42% of their expected benefits

Accuracy and efficiency become the name of the game as it applies to profit; but developers' profit margins will be sorely compromised without up-to-date functionality and access to current tools via an up-to-date ALM system.  Additionally, if no automated system is integrated, IT will be forced to spend a good deal of valuable time addressing compliance-related issues; and that can be costly. 

Your vendor's R&D department should certainly be acutely aware of new trends in the industry as well as responsive to customer’s requests.  A coveted ALM solution will incorporate 1) on-board templates for compliance reporting 2) compatibility and remote access with any mobile device 3) tools such as dashboards, real-time reports & analytics and automated work-flows –  all, of which, enable every team-member to stay up-to-date. 

The cost of doing nothing can take a titanic toll when one considers that missed app-release timelines, opportunities lost in the shuffle, and valuable time spent addressing compliance concerns and audits all cost a business, big-time!  The question, then, becomes obvious:  you believe you can't afford to integrate a modern ALM solution – but can you afford NOT to?

Our Current ALM Solution Seems to be Working Fine
In order to effectively, efficiently and optimally monitor, manage and coordinate all the people, processes, changes and dynamics that are intricately involved in the application life-cycle, utilizing the most sophisticated ALM solution is key!  Development personnel feel the demands of deploying functionality and fixes very quickly.  The IT setting is extremely complex; and in this environment, database servers, web servers, diverse clientele and every type of mobile device equate to sophisticated development and release processes.  All this must be intricately orchestrated without a hitch; and a modern ALM solution is what it takes to fully ensure flawless and seamless operations in every department.

With the most modern ALM solution, users can enjoy the ease at which systems and work-flows come together in addition to the minimization of production errors and the maximization of collaboration efforts.  Then, imagine all this coupled with data access from any mobile device, compliance reports with point-and-click ease and automation processes that are as easy as child's play.

Older ALM solutions are just that, 'old', and with that comes the inability of an archaic solution to offer the newest technologies, which equates to lost time fixing bad code and dealing with coding errors, to give only a single example.  And then, of course, there is the lost revenue.  In the end, the growth of a company is stifled.  Again, a modern ALM solution keeps a business in position as the 'alpha' and leader of the competitive pack, since the people and processes involved are all humming like a fine-tuned engine – no barricades, no inefficiency and virtually no errors.

Transitioning to a New ALM Would Be Too Time-Consuming

How one chooses a vendor can make the difference between reaping the benefits of a dedicated and seasoned professional with an unparalleled product that he or she is excited to share, versus a vendor whose interest in your goals and progress is marginal, at best.  Assuming the right vendor has been selected, the time required to get the system fully running will be minuscule.  Personnel can very quickly enjoy immediate visibility, coordination and management across distributed systems, teams and tools.  In the end, previously-lost revenue due to outdated ALM systems becomes a distant memory since teams will no longer contend with drawn-out, manual processes but will now have the updated abilities to very quickly communicate, collaborate, update etc. regarding any and all application projects.

Not one single team-member needs to concern him or herself with transitioning into an updated system.  A committed vendor will make sure the necessary and expected support is entirely available for everyone involved.  Again, in the end, any time invested in becoming familiar with a new ALM solution will begin to immediately pay for itself due to optimized usability involving real-time visibility, flexibility, accuracy and automation.

Our Current ALM Serves Only Development

When a business chooses stagnation over progress, it can become the 'kiss of death' for the organization.  Because technology will never slow down or even reach an apex, a business absolutely must stay on track with innovative ideas, processes and insights.  An integrated ALM system ensures that users can take full advantage of managing, in real-time, every aspect of development and delivery.  A top-tier ALM solution will provide instantaneous updates on every component ranging from code to work-flow to dashboards and everything in-between and beyond.  Smarter, more-insightful decisions become common-place among everyone involved – whether development personnel, auditors, programmers, etc.  Since DevOps departments evolve and advance in the enterprise, so too, must the ALM system by functioning as the centralized collaborative arena where inter-department communications are available whenever and wherever required.

After it's all said and done, switching to a modern ALM solution will, realistically, save money over the long haul since time is being dramatically saved – and time is money!  Those few words serve as a cliché as well as a fact.  Whether one is speaking of departments collaborating on changes at any level, or enhanced visibility that maximizes work-flow or whether one is talking about users gaining advanced capabilities resulting in succinct, precise and quick decision-making, it all adds up, once again, to saving copious amounts of time which translates into saving impressive amounts of revenue.

A reliable vendor will provide the kind of support one would expect from a supplier that operates as a top-tier contender in the industry.  Vendor support should include:

  • Access to the most up-dated interfaces and devices
  • Assistance with any existing OS
  • Intervention for all platforms, on which, code is being developed
  • Mobile and web development
  • Out-of-the-box plug-ins to converge with other tools
  • Compliance-report templates
  • Delivery of single-screen visibility with all IT involvement
  • Adjustable point-and-click distribution and deployment and mobile functionality with everything

It is an ever-changing business climate where technology is king.  And...

                                            Adaptation equals growth and growth equals SUCCESS!    


About the author: Daniel Magid is Rocket’s IBM i solution leader and Director of the Rocket Application Lifecycle Management (ALM) and DevOps lab. Having started his career at IBM in 1981 in the midrange computer division, Daniel brings to Rocket Software more than 30 years of experience in the IBM midrange marketplace. Prior to coming to Rocket as part of the acquisition of Aldon in 2011, Daniel was Aldon’s CEO and Chief Product Strategist. Daniel led the growth of Aldon from a small 4 person consulting company to the largest provider of ALM and DevOps solutions in the IBM i market. Daniel is a recognized expert in application development and DevOps in the IBM i market and a well-known presence at IBM i conferences.




Categories: DBA Blogs

Gluent New World #04: Next Generation Oracle Database Architectures using Super-Fast Storage with James Morle

Tanel Poder - Mon, 2016-06-13 11:30

It’s time to announce the 4th episode of the Gluent New World webinar series, featuring James Morle! James is a database/storage visionary and has been actively contributing to the Oracle database scene for over 20 years – including his unique book Scaling Oracle 8i, which gave a full-stack overview of how the different layers of your database platform work and perform together.

The topic for this webinar is:

When the Rules Change: Next Generation Oracle Database Architectures using Super-Fast Storage

Speaker:

  • James Morle has been working in the high performance database market for 25 years, most of which has been spent working with the Oracle database. After 15 years running Scale Abilities in the UK, he now leads the Oracle Solutions at DSSD/EMC in Menlo Park.

Time:

  • Tue, Jun 21, 2016 12:00 PM – 1:15 PM CDT

Abstract:

  • When enabled with revolutionary storage performance capabilities, it becomes possible to think differently about physical database architecture. Massive consolidation, simplified data architectures, more data agility and reduced management overhead. This presentation, based on the DSSD D5 platform, includes performance and cost comparison with other platforms and shows how extreme performance is not only for extreme workloads.

Register here:

This should be fun! As usual, I’ll be asking some questions myself and the audience can ask questions too. See you soon!

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Running Big Data Discovery Shell and Jupyter Notebook on Big Data Lite VM 4.5

Rittman Mead Consulting - Mon, 2016-06-13 09:19

New in Big Data Discovery 1.2 is the addition of BDD Shell, an integration point with Python. This exposes the datasets and BDD functionality in a Python and PySpark environment, opening up huge possibilities for advanced data science work on BDD datasets, particularly when used in conjunction with Jupyter Notebooks. With the ability to push back to Hive and thus BDD data modified in this environment, this is important functionality that will make BDD even more useful for navigating and exploring big data.

The Big Data Lite virtual machine is produced by Oracle for demo and development purposes, and hosts all the components that you’d find on the Big Data Appliance, all configured and integrated for use. Version 4.5 was released recently, which included BDD 1.2. In this article we’ll see how to configure BDD Shell on Big Data Lite 4.5 (along with Jupyter Notebooks), and in a subsequent post dive into how to actually use them.

Setting up BDD Shell on Big Data Lite

You can find the BDD Shell installation document here.

Login to BigDataLite 4.5 (oracle/welcome1) and open a Terminal window. The first step is to download Anaconda, which is a distribution of Python that also includes “[…] over 100 of the most popular Python, R and Scala packages for data science” as well as Jupyter notebook, which we’ll see in a moment.

cd ~/Downloads/
wget http://repo.continuum.io/archive/Anaconda2-4.0.0-Linux-x86_64.sh

Then install it: (n.b. bash is part of the command to enter)

bash Anaconda2-4.0.0-Linux-x86_64.sh

Accept the licence when prompted, and then select an install location – I used /u01/anaconda2, where the rest of the BigDataLite installs are.

Anaconda2 will now be installed into this location:
/home/oracle/anaconda2

  - Press ENTER to confirm the location
  - Press CTRL-C to abort the installation
  - Or specify a different location below

[/home/oracle/anaconda2] >>> /u01/anaconda2

After a few minutes of installation, you’ll be prompted as to whether you want to prepend Anaconda’s location to the PATH environment variable. I opted not to (which is the default) since Python is used elsewhere on the system, and by prepending it, Anaconda would take priority and possibly break things.

Do you wish the installer to prepend the Anaconda2 install location
to PATH in your /home/oracle/.bashrc ? [yes|no]
[no] >>> no

Now edit the BDD Shell configuration file (/u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/bdd-shell.conf) in your favourite text editor to add/amend the following lines:

SPARK_EXECUTOR_PYTHON=/u01/anaconda2/bin/python
LOCAL_PYTHON_HOME=/u01/anaconda2

Amend the path if you didn’t install Anaconda into /u01

In the same configuration file, add/amend:

SPARK_HOME=/usr/lib/spark/
SPARK_EXTRA_CLASSPATH=/usr/lib/oozie/oozie-sharelib-yarn/lib/spark/spark-avro_2.10-1.1.0-cdh5.7.0.jar

Now run the BDD Shell setup:

/u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/setup.sh

This should succeed:

[bigdatalite.localdomain] Validating pre-requisites...
[bigdatalite.localdomain] Validation Success
[bigdatalite.localdomain] Setting up BDD Shell...
[bigdatalite.localdomain] Setup Success
[oracle@bigdatalite Downloads]$

Assuming it does, you can now launch the shell bdd-shell.sh:

[oracle@bigdatalite Downloads]$ /u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/bdd-shell.sh
WARNING: User-defined SPARK_HOME (/usr/lib/spark) overrides detected (/usr/lib/spark/).
WARNING: Running spark-class from user-defined location.
spark.driver.cores is set but does not apply in client mode.
Welcome to
     ___   ___   ___       __   _     ____  _     _
    | |_) | | \ | | \     ( (` | |_| | |_  | |   | |
    |_|_) |_|_/ |_|_/     _)_) |_| | |_|__ |_|__ |_|__

SparkContext available as sc, HiveContext available as sqlContext.
BDD Context available as bc.

>>>

From the BDD Shell you can interact with BDD, for example to list out the datasets currently defined in the Catalog:

>>> bc.datasets().count
17
>>> for ds in bc.datasets():
...     print ds
...

media_demo_customer     edp_cli_edp_2c7f41ee-65bf-43ac-8bb4-5b6b59a55d75        edp_cli_edp_2c7f41ee-65bf-43ac-8bb4-5b6b59a55d75        Hive    default.media_demo_customer

movie_genre     default_edp_7d4c18a5-6f02-4067-9f63-91f950078b1e        default_edp_7d4c18a5-6f02-4067-9f63-91f950078b1e        Hive    default.movie_genre

media_demo_customer     default_edp_89c616b6-aa10-4827-aa82-1e9c3fcc419e        default_edp_89c616b6-aa10-4827-aa82-1e9c3fcc419e        Hive    default.media_demo_customer

Whilst BDD Shell is command-line based, there’s also the option to run Jupyter Notebooks (previously IPython Notebooks), a web-based interactive “notebook” environment. This lets you build up scripts exploring and manipulating the data within BDD, using both Python and Spark. The big advantage of this over the command-line interface is that a ‘Notebook’ enables you to modify and re-run commands, and then, once correct, retain them as a fully functioning script for future use.

To launch it, run:

cd /u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell
/u01/anaconda2/bin/jupyter-notebook --port 18888

Important points to note:

  • It’s important that you run this from the bdd-shell folder, otherwise the BDD shell won’t initialise properly
  • By default Jupyter uses port 8888, which is already in use on BigDataLite by Hue, so use a different one by specifying --port
  • Jupyter by default only listens locally, so you need to either use the BigDataLite desktop to run Firefox, or use port-forwarding (see the sketch below) if you want to access Jupyter from your local web browser.
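For the port-forwarding route, something like the following from your local machine should work (adjust the host name or IP to however you reach the VM):

# forward local port 18888 to the Jupyter port on the Big Data Lite VM
ssh -L 18888:localhost:18888 oracle@bigdatalite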

Go to http://localhost:18888 in your web browser, and you should see the default Jupyter screen with a list of files:

In the next article, we’ll see how to use Jupyter Notebooks with Big Data Discovery, and get an idea of just how powerful the combination can be.

The post Running Big Data Discovery Shell and Jupyter Notebook on Big Data Lite VM 4.5 appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Running Big Data Discovery Shell and Jupyter Notebook on Big Data Lite VM 4.5

Rittman Mead Consulting - Mon, 2016-06-13 08:19
Running Big Data Discovery Shell and Jupyter Notebook on Big Data Lite VM 4.5

New in Big Data Discovery 1.2 is the addition of BDD Shell, an integration point with Python. This exposes the datasets and BDD functionality in a Python and PySpark environment, opening up huge possibilities for advanced data science work on BDD datasets, particularly when used in conjunction with Jupyter Notebooks. With the ability to push back to Hive and thus BDD data modified in this environment, this is important functionality that will make BDD even more useful for navigating and exploring big data.

The Big Data Lite virtual machine is produced by Oracle for demo and development purposes, and hosts all the components that you'd find on the Big Data Appliance, all configured and integrated for use. Version 4.5 was released recently, which included BDD 1.2. In this article we'll see how to configure BDD Shell on Big Data Lite 4.5 (along with Jupyter Notebooks), and in a subsequent post dive into how to actually use them.

Setting up BDD Shell on Big Data Lite

You can find the BDD Shell installation document here.

Login to BigDataLite 4.5 (oracle/welcome1) and open a Terminal window. The first step is to download Anaconda, which is a distribution of Python that also includes "[...] over 100 of the most popular Python, R and Scala packages for data science" as well as Jupyter notebook, which we'll see in a moment.

cd ~/Downloads/  
wget http://repo.continuum.io/archive/Anaconda2-4.0.0-Linux-x86_64.sh
Then install it: (n.b. bash is part of the command to enter)
  
bash Anaconda2-4.0.0-Linux-x86_64.sh  

Accept the licence when prompted, and then select a install location - I used /u01/anaconda2 where the rest of the BigDataLite installs are

Anaconda2 will now be installed into this location:  
/home/oracle/anaconda2

  - Press ENTER to confirm the location
  - Press CTRL-C to abort the installation
  - Or specify a different location below

[/home/oracle/anaconda2] >>> /u01/anaconda2

After a few minutes of installation, you'll be prompted to whether you want to prepend Anaconda's location to the PATH environment variable. I opted not to (which is the default) since Python is used elsewhere on the system and by prepending it it'll take priority and possibly break things.

Do you wish the installer to prepend the Anaconda2 install location  
to PATH in your /home/oracle/.bashrc ? [yes|no]  
[no] >>> no

Now edit the BDD Shell configuration file (/u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/bdd-shell.conf) in your favourite text editor to add/amend the following lines:

  
SPARK\_EXECUTOR\_PYTHON=/u01/anaconda2/bin/python  
LOCAL\_PYTHON\_HOME=/u01/anaconda2  

Amend the path if you didn't install Anaconda into /u01

In the same configuration file, add/amend:

  
SPARK_HOME=/usr/lib/spark/  
SPARK\_EXTRA\_CLASSPATH=/usr/lib/oozie/oozie-sharelib-yarn/lib/spark/spark-avro_2.10-1.1.0-cdh5.7.0.jar  

Now run the BDD Shell setup:

  
/u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/setup.sh

This should succeed:

  
[bigdatalite.localdomain] Validating pre-requisites...
[bigdatalite.localdomain] Validation Success
[bigdatalite.localdomain] Setting up BDD Shell...
[bigdatalite.localdomain] Setup Success
[oracle@bigdatalite Downloads]$

Assuming it does, you can now launch the shell bdd-shell.sh:

  
[oracle@bigdatalite Downloads]$ /u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/bdd-shell.sh
WARNING: User-defined SPARK_HOME (/usr/lib/spark) overrides detected (/usr/lib/spark/).  
WARNING: Running spark-class from user-defined location.  
spark.driver.cores is set but does not apply in client mode.  
Welcome to  
     ___   ___   ___       __   _     ____  _     _
    | |_) | | \ | | \     ( (` | |_| | |_  | |   | |
    |_|_) |_|_/ |_|_/     _)_) |_| | |_|__ |_|__ |_|__

SparkContext available as sc, HiveContext available as sqlContext.  
BDD Context available as bc.

>>>

From the BDD Shell you can interact with BDD, for example to list out the datasets currently defined in the Catalog:

>>> bc.datasets().count  
17  
>>> for ds in bc.datasets():
...     print ds
...

media_demo_customer     edp_cli_edp_2c7f41ee-65bf-43ac-8bb4-5b6b59a55d75        edp_cli_edp_2c7f41ee-65bf-43ac-8bb4-5b6b59a55d75        Hive    default.media_demo_customer

movie_genre     default_edp_7d4c18a5-6f02-4067-9f63-91f950078b1e        default_edp_7d4c18a5-6f02-4067-9f63-91f950078b1e        Hive    default.movie_genre

media_demo_customer     default_edp_89c616b6-aa10-4827-aa82-1e9c3fcc419e        default_edp_89c616b6-aa10-4827-aa82-1e9c3fcc419e        Hive    default.media_demo_customer

Whilst BDD Shell is command-line based, there's also the option to run Jupyter Notebooks (previously IPython Notebooks), which are web-based interactive "Notebooks". This lets you build up scripts exploring and manipulating the data within BDD, using both Python and Spark. The big advantage of this over the command-line interface is that a Notebook enables you to modify and re-run commands, and then once correct, retain them as a fully functioning script for future use.

To launch it, run:

  
cd /u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell  
/u01/anaconda2/bin/jupyter-notebook --port 18888

Important points to note:

  • It's important that you run this from the bdd-shell folder, otherwise the BDD shell won't initialise properly
  • By default Jupyter uses port 8888, which is already in use on BigDataLite by Hue, so use a different one by specifying --port
  • Jupyter by default only listens locally, so you need to either be using the BigDataLite desktop to run Firefox, or use port-forwarding (see the sketch below) if you want to access Jupyter from your local web browser.
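
For the port-forwarding option, a minimal sketch - the oracle@bigdatalite login is the BigDataLite default mentioned above; substitute the hostname or IP address your VM is actually reachable on:

# Forward local port 18888 to the Jupyter port on the VM,
# then browse to http://localhost:18888 on your own machine
ssh -L 18888:localhost:18888 oracle@bigdatalite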

Go to http://localhost:18888 in your web browser, and you should see the default Jupyter screen with a list of files:


In the next article, we'll see how to use Jupyter Notebooks with Big Data Discovery, and get an idea of just how powerful the combination can be.

Categories: BI & Warehousing

Bitmap Counts

Jonathan Lewis - Mon, 2016-06-13 06:40

A question came up on the Oracle-L list server a few days ago about a query whose plan showed several bitmap operations. The problem was that the A-Rows column reported by a call to dbms_xplan.display_cursor() was showing numbers that seemed to be far too small. In fact the query was producing a parallel execution plan, so the “actuals” for the parallel server operations were reporting zeros because the OP had used the “allstats last” formatting option rather than just “allstats” – but the numbers were still far too small even after this error had been corrected.

This was a detail I’d not noticed before, but there was an obvious(footnote 1) guess to explain why this apparent anomaly had appeared, viz: A-Rows counts rows in the rowsource for a table, index entries for a B-tree rowsource, and strings of bits when producing bitmaps. But even the most obvious guess should be checked, so here’s some code that will (at least) corroborate the hypothesis:


rem     Script:         bitmap_counts.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2016

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                       id,
        cast(mod(rownum - 1,2) as number(8,0))            n2,
        cast(mod(rownum - 1,100) as number(8,0))          n100,
        cast(lpad(rownum,10,'0') as varchar2(10))         v1,
        cast(lpad('x',100,'x') as varchar2(100))          padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
; 
create bitmap index t1_b2 on t1(n2) nologging;
create bitmap index t1_b100 on t1(n100) nologging;

begin 
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    =>'T1',
                method_opt => 'for all columns size 1'
        );
end;
/

I’ve created a table that includes two columns that I’ve indexed with bitmap indexes. Because of the size of the table and the pattern of the data, each distinct value for the two columns will require multiple index entries in its bitmap index.

The multiple bitmap chunks are important but before I comment further, here’s what the program does after creating the data:


select  index_name, distinct_keys, num_rows, clustering_factor
from    user_indexes
where   table_name = 'T1'
order by
        index_name
;

alter session set statistics_level = all;
set serveroutput off

select
        count(*)
from    t1
where   n2 = 0
and     n100 between 20 and 29
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

The first query reports some index stats – to confirm my comment about multiple index entries per key – the second query is the one that’s going to give me a useful plan. Let’s just check the results of the two queries first – the index stats are the only ones needing any comment:


INDEX_NAME           DISTINCT_KEYS   NUM_ROWS CLUSTERING_FACTOR
-------------------- ------------- ---------- -----------------
T1_B100                        100        800               800
T1_B2                            2         94                94


  COUNT(*)
----------
     50000

Index t1_b100 reports 800 index entries – the bitmap chunk for each value had to be split into 8 rowid ranges; we’re interested in 10 key values from this index.

Index t1_b2 shows 94 index entries – the bitmap for each value had to be split into 47 rowid ranges: we’re interested in one key value from this index.

And this is what the plan shows:


-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |      1 |        |      1 |00:00:00.01 |      69 |       |       |          |
|   1 |  SORT AGGREGATE              |         |      1 |      1 |      1 |00:00:00.01 |      69 |       |       |          |
|   2 |   BITMAP CONVERSION COUNT    |         |      1 |  55455 |      2 |00:00:00.01 |      69 |       |       |          |
|   3 |    BITMAP AND                |         |      1 |        |      2 |00:00:00.01 |      69 |       |       |          |
|*  4 |     BITMAP INDEX SINGLE VALUE| T1_B2   |      1 |        |     47 |00:00:00.01 |      25 |       |       |          |
|   5 |     BITMAP MERGE             |         |      1 |        |      2 |00:00:00.01 |      44 |  1024K|   512K|  291K (0)|
|*  6 |      BITMAP INDEX RANGE SCAN | T1_B100 |      1 |        |     80 |00:00:00.01 |      44 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("N2"=0)
   6 - access("N100">=20 AND "N100"<=29)


I can’t explain why the in-memory manipulation of the bitstrings apparently produces two bitstrings at operations 3 and 5, but given the index stats we can understand that the 47 “rows” reported for operation 4 allow for one of the two key values in the index and the 80 “rows” reported for operation 6 allow for 10 of the key values in the index. Q.E.D.

Bonus commentary

If you want to see table row counts (or their equivalent) appearing in a bitmap plan you’ll need to run a query that does a “bitmap conversion to rowids”, e.g.:


select
        count(n100)
from    t1
where   n2 = 0
and     n100 between 20 and 29
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |      1 |        |      1 |00:00:05.04 |      67 |       |       |          |
|   1 |  SORT AGGREGATE                |                  |      1 |      1 |      1 |00:00:05.04 |      67 |       |       |          |
|*  2 |   VIEW                         | index$_join$_001 |      1 |  55455 |  50000 |00:00:04.94 |      67 |       |       |          |
|*  3 |    HASH JOIN                   |                  |      1 |        |  50000 |00:00:04.75 |      67 |    25M|  4150K|   27M (0)|
|   4 |     BITMAP CONVERSION TO ROWIDS|                  |      1 |  55455 |    500K|00:00:00.94 |      25 |       |       |          |
|*  5 |      BITMAP INDEX SINGLE VALUE | T1_B2            |      1 |        |     47 |00:00:00.01 |      25 |       |       |          |
|   6 |     BITMAP CONVERSION TO ROWIDS|                  |      1 |  55455 |    100K|00:00:00.19 |      42 |       |       |          |
|*  7 |      BITMAP INDEX RANGE SCAN   | T1_B100          |      1 |        |     80 |00:00:00.01 |      42 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("N100"<=29 AND "N2"=0 AND "N100">=20))
   3 - access(ROWID=ROWID)
   5 - access("N2"=0)
   7 - access("N100">=20 AND "N100"<=29)


Note how 80 “rows” at operation 7 turn into 100,000 rows at operation 6, and 47 “rows” at operation 5 turn into 500,000 rows at operation 4.

I’ve tested 11.2.0.4 and 12.1.0.2 with this code and they both behave the same way. Interestingly the final query (count with index hash join) took about 0.12 seconds to run with rowsource execution statistics disabled, but roughly 5 seconds with statistics enabled – and the extra time was all in the hash join.
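
As an aside, if you want these rowsource statistics for a single statement without setting statistics_level to all for the whole session, the gather_plan_statistics hint collects the same figures for just that statement; a sketch, re-using the test query above:

select  /*+ gather_plan_statistics */
        count(n100)
from    t1
where   n2 = 0
and     n100 between 20 and 29
;

-- 'allstats last' then reports the A-Rows etc. for the last execution
select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));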

Footnote 1:

“Obvious”: provided you’ve been working with the relevant bits of the Oracle software for several years(footnote 2) or have been reading the right books; even then something that’s “obvious” isn’t necessarily correct. ‘Oh, obvious,’ said Granny [Weatherwax]. ‘I’ll grant you it’s obvious. Trouble is, just because things are obvious doesn’t mean they’re true.’ – Wyrd Sisters: Terry Pratchett.

Footnote 2:

‘Oh, it’s largely intuitive, Archchancellor,’ said Ponder.  ‘Obviously you have to spend a lot of time learning it first, though.’ – Hogfather: Terry Pratchett

 


PeopleSoft Security Patches

The process of applying security patches starts with identifying which patches to apply. For PeopleSoft, security patches need to be considered for both the application and the major technical components. The application of security patches, referred to by Oracle as Critical Patch Updates (CPUs), for one component DO NOT apply security patches for the other components.

For example, PeopleTools CPU patches DO NOT include database CPUs – applying one will not automatically apply nor include the other. The same holds for WebLogic and Tuxedo CPU patches.

CPUs for PeopleTools releases are provided for up to 24 months after the next minor release is generally available. The following table will assist in analyzing your PeopleTools CPU level and certification status against other key PeopleSoft technical components:

PeopleTools (PT) | PT Generally Available Date | PT CPU Delivered through | Database Certifications | WebLogic Certifications | Tuxedo Certification
---------------- | --------------------------- | ------------------------ | ----------------------- | ----------------------- | --------------------
PT8.51           | 9/10/10                     | Jan 2014                 | 11.2.0.4                | 10.3.6.0                | 10.3.0.0
PT8.52           | 10/28/11                    | Jan 2015                 | 11.2.0.4                | 10.3.6.0                | 10.3.0.0
PT8.53           | 2/1/13                      | 7/19/16                  | 11.2.0.4, 12.1.0.2      | 10.3.6.0                | 11.1.3.0, 11.1.1.2
PT8.54           | 7/11/14                     | 12/4/17                  | 11.2.0.4, 12.1.0.2      | 12.1.3.0, 12.1.2.0      | 12.1.1.0
PT8.55           | 12/4/15                     | TBD                      | 11.2.0.4, 12.1.0.2      | 12.1.3.0                | 12.1.1.0, 12.1.3.0

  • WebLogic 10.3.6.x is supported through December 2018
  • WebLogic 12.1.2.0 is supported through 6/2016
  • WebLogic 12.1.3.0 is supported through 12/2017 and will be the terminal release of 12.1.x
  • Tuxedo support dates: 10.3 through 12/2016, 12.1.3 through 2020; all 11.x and 12.1.1 end in 2018

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

REFERENCES

PeopleSoft Database Security

PeopleSoft Security Quick Reference

 

Oracle PeopleSoft, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Data Recovery Advisor (11g)

Hemant K Chitale - Mon, 2016-06-13 03:13
Here's my YouTube video on using the 11g Data Recovery Advisor from the RMAN command line.

Other videos are in my YouTube Channel.
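
For reference, a minimal sketch of the Data Recovery Advisor flow at the RMAN prompt, run against a target database with a detected failure:

RMAN> list failure;
RMAN> advise failure;
RMAN> repair failure preview;
RMAN> repair failure;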

.
.
.

Categories: DBA Blogs

database option - tuning part 2

Pat Shuff - Mon, 2016-06-13 02:07
In our last entry we looked at the results of a sql tuning advisor. We used SQL Developer to execute our code and create a tuning advisory for the code that we executed. We could have gone through Enterprise Manager and done the same thing, but done this historically rather than on live data. In this blog entry we will analyze the same results using the Enterprise Manager Express that comes with the database as a service in the Oracle Cloud. To connect to this service we first need to open up the network ports to enable connections to port 1158. This is done through the database console, or we could do it with ssh tunneling of port 1158 to our database target ip address.
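
A minimal sketch of the tunneling approach - the opc login and the key path are assumptions for a typical Oracle Cloud database service, and the 129.152.134.189 address is the example server used in this post:

# Forward local port 1158 to Enterprise Manager Express on the database service,
# then browse to https://localhost:1158/em
ssh -i ~/.ssh/my_cloud_key -L 1158:localhost:1158 opc@129.152.134.189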

Once we have access to port 1158 we can connect to Enterprise Manager Express by going to the ip address of our server, in this instance 129.152.134.189 (which we got from the database console), and connecting to https://129.152.134.189:1158/em. Note that we might get a security exception since the certificate is self-signed; we need to add an exception and connect to this service. When we are prompted for a login we connect as sys with sysdba rights. Note that we cannot do this on Amazon RDS since we do not have sys access to the database in this service.

When we click on the Performance link at the top of the screen we can look at the overall performance of our system and drill down in to reports to get more information.

If we scroll down to the bottom we see a link called Advisor Central. We can follow this link and look at all of the tuning advisors that have been run and examine the results.

We can select a previously run tuning advisor and look at the recommendations from the run that we did in SQL Developer. When we dive into the report we get a slightly different style of report.

Note that the SQL profile recommends a 19.8% savings if we change the profile. If we click on the Recommendations and expand the information as a table rather than a graph, we see that the pk_dept reference takes up a good amount of time; since it is not referenced, getting rid of it will speed up the select statements. If we click on the compare explain plans we can see how much of a speedup we will get if we implement the new plan. What I don't see from this is the recommendation to drop the dept d reference that we got from SQL Developer.

Note that the recommendation does state "Consider removing the disconnected table or view from this statement or add a join condition which refers to it" but does not specifically recommend removing dept d from the select statement as is done in SQL Developer.

If we wanted to expand upon use of the tuning advisor we could follow along with the Oracle by Example Tuning Tutorial and look at how to initiate tuning advisories through Enterprise Manager. The first thing done in this tutorial is to initiate nightly tuning tasks by going into the server and enabling the Automated Maintenance Tasks. You first click the Configure button, then click the Configure Automatic SQL Tuning button. After you change the Automatic Configuration of SQL Profiles to yes and click Apply, you get a checkbox window to select the days on which to run the tuning tasks. The same settings can also be applied from SQL*Plus, as sketched below.
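
A sketch of the equivalent SQL*Plus commands, using the documented DBMS_AUTO_TASK_ADMIN and DBMS_SQLTUNE packages (run as sys; this simpler form enables the client for all maintenance windows rather than selected days):

-- enable the nightly automatic SQL tuning client for all maintenance windows
exec dbms_auto_task_admin.enable('sql tuning advisor', null, null);

-- allow the advisor to accept SQL profiles automatically
exec dbms_sqltune.set_auto_tuning_task_parameter('ACCEPT_SQL_PROFILES', 'TRUE');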

Once we have this defined we can execute the following code as sys

set echo on

drop user ast cascade;
create user ast identified by ast;
grant dba to ast;
alter system flush shared_pool;
--
-- Turn off AUTOTASK
--
alter system set "_enable_automatic_maintenance"=0;
--
-- Clear out old executions of auto-sqltune
--
exec dbms_sqltune.reset_tuning_task('SYS_AUTO_SQL_TUNING_TASK');
--
-- Drop any profiles on AST queries
--
declare
  cursor prof_names is
    select name from dba_sql_profiles where sql_text like '%AST%';
begin
  for prof_rec in prof_names loop
    dbms_sqltune.drop_sql_profile(prof_rec.name);
  end loop;
end;
/
This creates a user ast. I recommend changing the password to something more complex.

We can then run a series of malformed select statements to generate some synthetic load to report upon and correct. Note that we do this as the ast user and not sys.

set echo off

select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;

Once we have the workload created we can kick off the sql tuning advisor with some different code.

set echo on

exec dbms_workload_repository.create_snapshot;

variable window varchar2(20);

begin
 select upper(to_char(sysdate,'fmday'))||'_WINDOW' into :window from dual;
end;
/

print window;

--
-- Open the corresponding maintenance window, but with other clients disabled
--

alter system set "_enable_automatic_maintenance"=1
/

exec dbms_auto_task_admin.disable( -
  'auto optimizer stats collection', null, :window);

exec dbms_auto_task_admin.disable( -
  'auto space advisor', null, :window);

exec dbms_scheduler.open_window(:window, null, true);

--
-- Close the maintenance window when sqltune is done
--

exec dbms_lock.sleep(60);

declare
  running number;
begin

  loop
    select count(*)
    into   running
    from   dba_advisor_executions
    where  task_name = 'SYS_AUTO_SQL_TUNING_TASK' and
           status = 'EXECUTING';

    if (running = 0) then
      exit;
    end if;

    dbms_lock.sleep(60);
  end loop;

  dbms_scheduler.close_window(:window);

end;
/

alter system set "_enable_automatic_maintenance"=0
/

--
-- Re-enable the other guys so they look like they are enabled in EM.
-- Still they will be disabled because we have set the underscore.
--

exec dbms_auto_task_admin.enable( -
  'auto optimizer stats collection', null, :window);

exec dbms_auto_task_admin.enable( -
  'auto space advisor', null, :window);

Note that this executes with some errors but still generates a good sql tuning advisor report. If we look back at the Advisor Central we can dive into the report and see what happened.
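
The same report can also be pulled back in SQL*Plus rather than through the browser; a sketch, assuming the DBMS_AUTO_SQLTUNE package available from 11.2 onwards:

set long 1000000 pagesize 0 linesize 200

-- report on the most recent execution of the automatic SQL tuning task
select dbms_auto_sqltune.report_auto_tuning_task from dual;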

We can get an 8-second speedup by reformatting the sql select statements. This might or might not be worth tuning, depending on how many times we execute the code.

In summary, we have alternate ways of looking at sql tuning as well as a way of looking at historic data. We turned on automatic tuning reports, which does consume more resources, but if we have extra cpu cycles we can benefit from the reports. The Enterprise Manager Express that comes with database as a service is a very powerful tool. It is not a centralized utility like a full Enterprise Manager installation, but it can be used to automate and record reports for a single database. This service is only installed with the platform as a service and must be manually added and configured if you install your own database on top of infrastructure as a service. Having this common management interface is a huge benefit to DBAs who are asked to manage and maintain instances in the cloud. The Enterprise Manager used in the cloud is the exact same version that is used for an on-premise system. If you choose to install and configure a central Enterprise Manager server, you can attach to instances in your data center as well as instances in the cloud. The only requirement is that you have file-level access and sys/root access to install the agent.
