Feed aggregator

Maker Faire 2016

Oracle AppsLab - 6 hours 44 min ago

Another year, another amazing time at the Maker Faire.

I’ve attended my fair share of Maker Faires over the years, so the pyrotechnic sculptures, 3D printing masterpieces, and handmade artisan marketplaces came as no particular surprise. But somehow, every time I come around to the San Mateo fairgrounds, the Faire can’t help but feel aggressively fresh, crazy, and novel. This year, a host of new and intriguing trends kept me on my toes as I ventured through the greatest show and tell on Earth.

Young makers came out in full force this year. Elementary school maker clubs showed off their circuit projects, middle schoolers explained how they built their little robots, and high school STEM programs presented their battle robots. It’s pleasing to see how Maker education has blossomed these past years, and how products and startups like LittleBits and Adafruit have made major concepts in electronics and programming so simple and inexpensive that any kid can pick them up and start exploring. Also wonderful is seeing young teams traveling to the Bay Area from Texas, Oregon, and other states, a testament to the growth of the Maker movement beyond Silicon Valley.

Robot battle demo for kids

Speaking of young makers’ participation, Arduino creator Massimo Banzi talked about Arduino as an education tool for kids to play and tinker with, even though he never planned to make kids’ toys in his early years. The maker movement has sparked curious minds of all ages to start playing with electronics, making robots, and learning programming as a new language.

While the maker movement has made things very accessible to individuals, the same spirit of creation and innovation has also had an impact on the large enterprise. On the “Maker Pro” stage, our GVP, Jeremy Ashley (@jrwashley), talked about new trends in large enterprise application design, and how the OAUX group is driving the change toward simpler, yet more effective and more engaging, enterprise applications.

Jeremy talks on Maker Pro stage

Drones were also a trending topic this year, with a massive Drone Racing tent set up and events going on the whole weekend. Everything was being explored – new shapes for efficient and quick flight, new widgets and drone attachment modules, new methods of interaction with the drone. One team had developed a smart glove that responded to gyroscopic motion and gestures to control the flight of a quadcopter, and had the machine dance around the wearer – an interesting and novel marriage of wearable tech and flight.

Flight of the bumble drone

Personally, I’ve got a soft spot for art and whimsy, and the Faire had whimsy by the gallon. The artistry of creators from around the country and the globe can’t be overstated.

Lotus boat

Blueberry muffin

Flying dragon blowing fire

Maker Faire never disappoints. We brought along friends who had never been to a Faire, and it’s always fun to watch them get blown off their feet, literally and figuratively, the first time a flamethrower blasts out from the monolithic Crucible. Or their grins of delight when they see a cupcake-shaped racecar zoom past them… and another… and another. Or the spark of amazement when they witness some demo that’s beyond any realm of imagination.

Electrifying experience

CBO++

Jonathan Lewis - 8 hours 35 min ago

While browsing the web recently for articles on the HyperLogLog algorithm that Oracle uses for some of its approximate functions, I came upon a blog post written in Jan 2014 with the title Use Subqueries to Count Distinct 50X Faster. There are various ways that subqueries can be used to rewrite queries for improved performance, but when the title caught my eye I couldn’t think of a way in which they could improve “count distinct”.  It turned out that the word “subquery” was being used (quite correctly) in the sense of “inline view” while my mind had immediately turned to subqueries in the select list or where clause.

The article started by pointing out that if you have a query that does a join then aggregates the result you might be able to improve performance by finding a way of rewriting the query to aggregate before doing the join. (See this note from 2008). The article then went one step further to optimise a “count distinct” by wrapping a “select count” around a “select distinct” inline view as follows:

Original
--------
  select
    dashboard_id,
    count(distinct user_id) as ct
  from time_on_site_logs 
  group by dashboard_id

Rewrite
-------
select 
    inline.dashboard_id, 
    count(1) as ct
  from (
    select distinct dashboard_id, user_id
    from time_on_site_logs
  ) as inline
  group by inline.dashboard_id

(I’ve reproduced only the central part of the query being examined and I’ve changed the name of the inline view to eliminate the potential visual confusion due to the word “distinct” appearing in its name in the original).

The article was written using the Postgres dialect of SQL, with the comment that the technique was universal; and this brings me to the point of the post: the technique can be applied to Oracle’s dialect of SQL. Both ideas are good ideas whose effectiveness depends on the data patterns, data volume, and (potentially) indexing; but you may not need to rewrite the code because the optimizer is programmed to know that the ideas are good and can transform your query to the appropriate form internally. The “place group by” transformation appeared in 11.1.0.6 in 2007, and the “transform distinct aggregation” transformation appeared in 11.2.0.1 in 2009.

Here’s a little demo of Oracle handling a variation of the query I’ve shown above:


rem     Script: transform_distinct_agg.sql
rem     Dated:  May 2016
rem     Author: J.P.Lewis

create table t1 nologging 
as 
select  * 
from    all_objects 
where   rownum <= 60000
;
execute dbms_stats.gather_table_stats(user,'t1', method_opt=>'for all columns size 1')

alter session set statistics_level = all;

select owner, count(distinct object_type) from t1 group by owner;
select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline'));

prompt  ===============
prompt  Rewritten query
prompt  ===============

select  owner, count(1)
from    (
         select distinct owner, object_type
         from   t1
        ) distinct_types
group by
        owner
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline'));

Here are the two execution plans, pulled from memory – with the outline and some other peripheral lines deleted:


-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |           |      1 |        |      5 |00:00:00.23 |     865 |       |       |          |
|   1 |  HASH GROUP BY       |           |      1 |      5 |      5 |00:00:00.23 |     865 |  1452K|  1452K|  728K (0)|
|   2 |   VIEW               | VM_NWVW_1 |      1 |     78 |     30 |00:00:00.23 |     865 |       |       |          |
|   3 |    HASH GROUP BY     |           |      1 |     78 |     30 |00:00:00.23 |     865 |  4588K|  1708K| 2497K (0)|
|   4 |     TABLE ACCESS FULL| T1        |      1 |  60000 |  60000 |00:00:00.12 |     865 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------

===============
Rewritten query
===============

------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |      5 |00:00:00.23 |     865 |       |       |          |
|   1 |  HASH GROUP BY       |      |      1 |      5 |      5 |00:00:00.23 |     865 |  1452K|  1452K|  735K (0)|
|   2 |   VIEW               |      |      1 |     78 |     30 |00:00:00.23 |     865 |       |       |          |
|   3 |    HASH UNIQUE       |      |      1 |     78 |     30 |00:00:00.23 |     865 |  4588K|  1708K| 1345K (0)|
|   4 |     TABLE ACCESS FULL| T1   |      1 |  60000 |  60000 |00:00:00.12 |     865 |       |       |          |
------------------------------------------------------------------------------------------------------------------

Apart from the change from “HASH UNIQUE” to “HASH GROUP BY” the two plans are the same, using the same resources – the UNIQUE being a special case of the algorithm for the GROUP BY. Here (with some cosmetic editing) is the SQL of the “unparsed query” taken from the 10053 (CBO) trace file – notice how similar it is to the text suggested by the original article, in particular the inline view to get the distinct list of owner and object_type (using a group by with no aggregated columns, rather than a distinct):

SELECT 
        VM_NWVW_1.$vm_col_2 OWNER,
        COUNT(VM_NWVW_1.$vm_col_1) COUNT(DISTINCTOBJECT_TYPE)
FROM    (
                SELECT
                        T1.OBJECT_TYPE $vm_col_1,
                        T1.OWNER $vm_col_2
                FROM    TEST_USER.T1 T1
                GROUP BY 
                        T1.OWNER,T1.OBJECT_TYPE
        ) VM_NWVW_1
GROUP BY
        VM_NWVW_1.$vm_col_2
;

The Oracle optimizer is pretty good at finding efficient transformations for the query you wrote, so rather than rewriting a query (with the risk of making a mistake as you do so) you may only need to add a couple of hints to generate a suitable SQL Plan Baseline that you can attach to the original query.
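Here is a minimal sketch of that approach (my own illustration, not part of the original article): run a suitably hinted copy of the statement, look up its sql_id and plan_hash_value in v$sql, then attach the captured plan to the text of the original query through dbms_spm. The sql_id and plan_hash_value below are placeholders.

set serveroutput on

declare
        l_plans pls_integer;
begin
        l_plans := dbms_spm.load_plans_from_cursor_cache(
                sql_id          => '&hinted_sql_id',        -- placeholder: sql_id of the hinted statement
                plan_hash_value => &hinted_plan_hash_value, -- placeholder: its plan_hash_value from v$sql
                sql_text        => 'select owner, count(distinct object_type) from t1 group by owner'
        );
        dbms_output.put_line('Plans loaded: ' || l_plans);
end;
/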

Footnote:

Sometimes the optimizer will decide not to transform when it should, or decide to transform when it shouldn’t, so it’s nice to know that there are hints to block transformations – here’s the effect of adding /*+ qb_name(main) no_transform_distinct_agg(main) */ to my query:
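(For reference, the hinted statement, reconstructed here from the hint text since only its plan appears below, is simply:)

select  /*+ qb_name(main) no_transform_distinct_agg(main) */
        owner, count(distinct object_type)
from    t1
group by
        owner
;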


----------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      5 |00:00:00.25 |     865 |       |       |          |
|   1 |  SORT GROUP BY     |      |      1 |      5 |      5 |00:00:00.25 |     865 |  4096 |  4096 | 4096  (0)|
|   2 |   TABLE ACCESS FULL| T1   |      1 |  60000 |  60000 |00:00:00.12 |     865 |       |       |          |
----------------------------------------------------------------------------------------------------------------

The interesting thing to note here is that even though the query took a little longer to complete, the amount of memory allocated to run it was only 4K compared to the 2M needed by the transformed query. (In this example both workareas would have been in existence at the same time – that won’t be true of every query using multiple workareas.) This isn’t significant in this trivial case, but it demonstrates the point that sometimes there is no single best path – you can choose the path that protects the resource that’s under most pressure.


Changing the page title in Fluid at run-time

Javier Delgado - 16 hours 50 min ago
One of our customers asked us to implement nested landing pages, in which some tiles would open a second landing page, and eventually a third one, and so on. Let me illustrate the use case with some screenshots (I apologise that they are in Spanish, but they should be useful anyway). This is how the main landing page would look:


By clicking the "Formación y Desarrollo" tile, a new landing page will be displayed:


And eventually, you can click a tile in this landing page which opens a third one. Let's pick "Formación Acceso al Puesto" for instance:

Unfortunately, the customer could not take advantage of the Master - Detail Framework as they are on PeopleTools 8.54 and this functionality is only available in release 8.55 (which is more complex to upgrade to as Crystal Reports are no longer supported).
So, we decided to build a custom component for our nested landing pages. The component would be called again and again with different URL parameters, in such a way that the history could be maintained and the user could go back to the previous step instead of going all the way back to the top landing page.
As we were reusing the same component, we needed to adjust the page title in PeopleCode. This involved not only updating the page title itself, but also making sure the back button showed the title of the previous page.

Changing the Navigation Bar Title

By default, the title is set to the component label in the menu. Luckily, there is a good number of examples in the standard functionality in which the title is set at run time, so this one was not particularly difficult to implement. The code that does the trick is the following:
PTLAYOUT.PAGETITLE_GROUPBOX.Label = &title;
This code needs to be placed in the page activate event. If placed anywhere else, the standard PT_HEADERPAGE Activate code will override the title back to the default one.
Another option is to create a custom header page and add it to the component, but at least from the back button functionality point of view, it did not seem an easy solution.

Changing the Back Button Title

This one was trickier. PeopleSoft maintains a navigation history stack in Javascript which is populated with the default page title at load time using the following Javascript call:
AddToHistory('Cns Navgrppage', '', '', 'CNS_NAVGRPPAGE', 1, 27);
So, in order to keep the right page title in the navigation stack, we needed to update it. Fortunately, there is another Javascript function provided by PeopleSoft called UpdateHistory. The PeopleCode function AddOnLoadScript is particularly helpful when trying to run Javascript functions after the page is loaded. This is how we implemented the call, again in the page Activate PeopleCode event:
AddOnLoadScript("UpdateHistory('" | &title | "', undefined, undefined, undefined, 1);");
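Putting the two pieces together, the page Activate PeopleCode ends up looking roughly like the sketch below. This is only an outline: how &title is derived from the component's URL parameters is specific to the customer's implementation, so that part is shown with a placeholder parameter name.

Local string &title;

/* Placeholder: derive the title from the URL parameter identifying the current landing page */
&title = %Request.GetParameter("LANDING_PAGE_TITLE");

/* Update the navigation bar title; must run in page Activate or PT_HEADERPAGE resets it */
PTLAYOUT.PAGETITLE_GROUPBOX.Label = &title;

/* Keep the back-button history entry in sync once the page has finished loading */
AddOnLoadScript("UpdateHistory('" | &title | "', undefined, undefined, undefined, 1);");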








database alternatives

Pat Shuff - Tue, 2016-05-24 09:05
One of the key questions that I get asked on a regular basis is to justify the cost of some product. Why not use freeware? Why not put things together and use free stuff? When I worked at Texas A&M and Rice University we first looked at public domain software. We heavily used the Apache web server, Tomcat, MySQL, Postgres, Linux, and BSD. These applications worked up to a point. Yes, you can spin up one Apache web server on one server. Yes, you can have one Apache web server listen on multiple IP addresses and host multiple web sites. The issue typically is not how many web servers you can handle but how many clients you can answer. Easily 90% of the web servers could handle the load that they saw on a regular basis. We spent 80% of our time on the 10% that could not handle the load. Not all of the web servers could handle the functionality, either. For example, a student registration system needs to keep a shopping cart of selected classes, and you need to level up to an Apache Tomcat server to keep this data and the database connections persistently alive. If you use just a web server you need to store everything in the database: all of the transactions, all of the classes selected, and all of the fees associated with each class. Every interaction with the web server causes multiple connections with the database server. Doing this drives up the number of processors needed by the database, thus driving up the cost of the hardware and software licenses.

If we use an application server that can handle caching of data, we can keep a list of available classes on the application server and only go back to the database server for transactions. When a student selects a class, it is taken out of inventory and put into their class schedule for the next year. The same is true for online shopping, purchasing tickets for a play or an airline flight, or drafting a fantasy football team. Years ago ESPN ran a March Madness contest online. They presented your selections with an Apache web server and every team selection required an interaction with their database on the back end. The system performed miserably and it took hours to select all rounds and fill out your bracket. They updated the server with Javascript and a Tomcat server and allowed you to fill out all of round one in your browser. Once you finished the first round you submitted your selections and were presented with a round two based on your first round selections. They later put this on WebLogic and put all of the round selections in Java code on the WebLogic server. The single interaction with the database became the submission of your complete bracket. They went from thousands of interactions with the database to a single interaction per submission.

We can have similar architecture discussions at the database layer as well. If I am looking at a simple table lookup, why pay for a robust database like Oracle 12c? Why not use something like Azure Table Storage Services and do a simple select statement from a file store? Why not put this in a free version of Oracle with APEX on the web and define a REST API to pull the data based on a simple or potentially more complex select statement? Again, 90% of the problems can be solved with simple solutions. Simple table lookups, like translating a part name to a price, can be done with Excel, MySQL, APEX, JSON processing, or REST APIs. The difficulty comes with the remaining 10%. How do I correlate multiple tables to figure out the price of an item based on the cost of inventory, the cost of shipping, electrical costs, compensation costs for contractors and sales people, and other factors that determine profitability and pricing? How do I run a shortest-route algorithm for a trucking system based on traffic, customer orders, inventory in a warehouse, the size of a truck, and the salaries of the driver and loading dock personnel? For things like this you need a more complex database that can handle multiple table joins, spatial data, and pulling in road conditions and traffic patterns from external sources. Products like IBM DB2, Oracle Database, and Microsoft SQL Server can address some of these issues.

We also need to look at recovery and restoration time. When a Postgres server crashes, how long does it take to recover the database and get it back online? Can I fail over to a secondary parallel server, given that downtime is lost revenue or lost sales? If you go to the Home Depot site to order plumbing parts and it goes down, how long does it take to go to the Ace or Lowes web site and order the same part and have it delivered by the same delivery truck to your home or office? Keeping inventory, order entry, and web services up becomes more than just answering a query. It becomes a mission critical service that cannot go down for more than a few seconds. Services like Data Guard, Golden Gate, and Real Application Clusters are required to keep services up and active. MySQL, MongoDB, Amazon Aurora, and other entry level database technologies can handle simple requests but take minutes or hours to recover information for a database. Failing over through storage to another site is typically not an answer in this case. It takes minutes or hours to recover and restart a moderate database of 20 TB or larger. First the data replication needs to finish, then the database needs to be booted at the secondary site, and it needs to maintain consistency in the data as it comes back up. The application server then needs to connect to the new service and recommit requests that came in during and since the system failure. As this is happening, customers are opening a new browser tab and going to your competition to find the same part on another site.

In summary, it takes more than just getting a bigger and faster application server or database. Moving the services to the cloud isn't necessarily the answer either. You need to make sure that you move the two components together the majority of the time. Look at your application and ask where you spend most of your time. Is it tuning SQL statements? Is it writing new queries to answer business questions? Is it optimizing your disk layout to get tables to the database faster? Take a step back and ask why the database is pounding the disk so hard. Can I cache this data in the database by adding a little more memory to the disk controller or database server? Can I cache the data at the application server by adding more memory there, and keep from asking the database for the same information over and over again? In the next few days we are going to look at database options and database monitoring. We are going to look at some of these tools and refer back to the bigger picture. Yes, we can tune the storage to deliver all of the bits at the highest rate possible. Our question will not be how to do this but whether we should be doing this. Would something like an Exadata or the in-memory option allow us to transfer less data across the storage network and get answers faster? Would adding memory somewhere allow us to buffer more data and reduce the database requests, which reduces the amount of data needed from the disk?

Mobile Apps with PL/SQL

Gerger Consulting - Tue, 2016-05-24 08:25
We are happy to announce that the first mobile applications built with Formspider are now in production. You can download them from the Apple App Store using this link, or from the Google Play Store using this link.

    

Formspider enables PL/SQL developers to build cross platform mobile apps using only PL/SQL as the programming language.
Read the story of our Dutch client to learn more about Formspider Mobile and the possibilities it opens for Oracle PL/SQL Developers.
Categories: Development

Oracle’s Fashion Technology Makers: Soldering, Sewing, and Storytelling

Oracle AppsLab - Tue, 2016-05-24 06:44

Many hands make light (emitting diodes) work. Oracle Applications User Experience (OAUX) gets down to designing fashion technology (#fashtech) solutions in a fun maker event with a serious research and learning intent. OAUX Senior Director and resident part-time fashion blogger, Ultan “Gucci Translated” O’Broin (@ultan), reports from the Redwood City runway.

Fashion and Technology: What’s New?

Wearable technology is not new. Elizabeth I of England was a regal early adopter. In wearing an “armlet” given to her by Robert Dudley, First Earl of Leicester in 1571, the Tudor Queen set in motion that fusion of wearable technology and style that remains evident in the Fitbits and Apple Watches of today.

Elizabeth I’s device was certainly fly, described as “in the closing thearof a clocke, and in the forepart of the same a faire lozengie djamond without a foyle, hanging thearat a rounde juell fully garnished with dyamondes and a perle pendaunt.”

Regardless of the time we live in, for wearable tech to be successful it has to look good. It’s got to appeal to our sense of fashion. Technologists remain cognizant of the need to involve clothing experts in production and branding decisions. For example, at Google I/O 2016, Google and Levi’s announced an interactive jacket based on the Google Jacquard technology, which makes fabric interactive, applied to a Levi’s commuter jacket design.

Fashion Technology Maker Event: The Summer Collection

Misha Vaughan’s (@mishavaughan) OAUX Communications and Outreach team joined forces with Jake Kuramoto’s (@jkuramot) AppsLab (@theappslab) Emerging Tech folks recently in a joint maker event in Oracle HQ to design and build wearable tech solutions that brought the world of fashion and technology (#fashtech) together.

Julian Orr (@orr_ux) and intern David Xie flash off those word-face smart watches

Tawny Le (@ihearttanie) creates an interactive glove solution for aspiring keyboardists of all sorts.

The event included the creation of interactive light skirts, smart watch word faces, touch-sensitive drum gloves, sound-reactive jewelry, and more from the Adafruit wearables collection.

Sarahi Mireles (@sarahimireles) and Ana Tomescu (@annatomescu) work on that fiber-optic pixie skirt.

The occasion was a hive of activity, with sewing machines, soldering irons, hot-glue guns, Arduino technology, fiber-optic cables, LEDs, 3D printers, and the rest, all in evidence during the production process.

Fashtech events like this also offer opportunities for discovery, as the team found out that interactive synth drum gloves can not only create music but also be used as input devices to write code. Why limit yourself to one kind of keyboard?

Discovery, design, development: All part of the maker’s day. (L-r) Noel Portugal (@noelportugal), Raymond Xie (@YuhuaXie), and Lancy Silveira (@LancyS) get ready for the big reveal!

Wearable Tech in the Enterprise: Wi-Fi and Hi-Heels

What does all this fashioning of solutions mean for the enterprise? Wearable technology is part of the OAUX Glance, Scan, Commit design philosophy, key to the Mobility strategy reflecting our cloud-driven world of work. Smart watches are as much a part of the continuum of devices we use interchangeably throughout the day as smart phones, tablets, or laptops are, for example. To coin a phrase from OAUX Group Vice President Jeremy Ashley (@jrwashley) at the recent Maker Faire event, in choosing what works best for us, be it clothing or technology: one size does not fit all.

Maker events such as ours fuel creativity and innovation in the enterprise. They inspire the creation of human solutions using technology, ones that represent a more human way of working.

A distinction between what tech we use and what we wear at work and at home is no longer convenient. We’ve moved from BYOD to WYOD. Unless that wearable tech, a deeply personal device and style statement all in one, reflects our tastes and sense of fashion, we won’t use it: unless we’re forced to. The #fashtech design heuristic is: make it beautiful or make it invisible. So, let’s avoid wearables becoming swearables and style that tech, darling!

On Demand Webcast: Driving Improved Sales Productivity and Customer Engagement with Cloud

WebCenter Team - Tue, 2016-05-24 04:35
The digital age has radically changed sales and customer service, from the way sales reps conduct work to how customers interact with brands. Being able to access information anytime, anywhere is an imperative for sales reps and customers alike.
So, how can you meet these rising expectations to the delight of your account reps as you help boost the bottom line without decimating your IT budget on costly CRM purchases?
This brief webcast discusses how cloud content, tools and processes can improve sales performance, shorten sales cycles, and improve customer engagement. We reveal how cloud tools can transform sales productivity and customer engagement in your enterprise, and ultimately drive revenue. View today!

Recursion with recursive WITH

OraFAQ Articles - Tue, 2016-05-24 03:03

I recently had the opportunity to talk with Tom Kyte (!), and in the course of our conversation, he really made me face up to the fact that the SQL syntax I use every day is frozen in time: I’m not making much use of the analytic functions and other syntax that Oracle has introduced since 8i.

read more
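(The teaser ends here. For anyone unfamiliar with the feature named in the title, here is a minimal, hypothetical illustration of recursive subquery factoring, Oracle's recursive WITH clause available from 11.2 onwards; it simply generates the numbers 1 to 10:)

with n (x) as (
        select 1 from dual      -- anchor member
        union all
        select x + 1 from n     -- recursive member
        where  x < 10
)
select x from n
;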

Using DBMS_STREAMS_ADM To Cleanup GoldenGate

Michael Dinh - Mon, 2016-05-23 23:33

This is really messed up. I chose GoldenGate because I did not want to mess around with streams.

When using Integrated Capture or Delivery, knowing Streams is a prerequisite.

Apologies as the format is not pretty.

The QUEUE table was indeed missing and this is what I get for monkeying around.

To resolve the issue –  exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
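(The select * output below is very wide; a narrower projection, my own addition rather than part of the original post, shows just the columns that matter for this exercise:)

select capture_name, queue_owner, queue_name, capture_type, status
from   dba_capture;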

GGSCI (arrow.localdomain as ggs_admin@hawk) 3> unREGISTER EXTRACT e_hawk DATABASE

2016-05-23 19:16:32  ERROR   OGG-08221  Cannot register or unregister EXTRACT E_HAWK because of the following SQL error: 
OCI Error ORA-24010: QUEUE "GGS_ADMIN"."OGG$Q_E_HAWK" does not exist
ORA-06512: at "SYS.DBMS_APPLY_ADM_INTERNAL", line 468
ORA-06512: at "SYS.DBMS_APPLY_ADM", line 724
ORA-06512: at line 1 (status = 24010).

GGSCI (arrow.localdomain as ggs_admin@hawk) 4> exit


ARROW:(SYS@hawk):PRIMARY> select * from dba_capture;

CAPTURE_NAME                   QUEUE_NAME                     QUEUE_OWNER
------------------------------ ------------------------------ ------------------------------
RULE_SET_NAME                  RULE_SET_OWNER                 CAPTURE_USER
------------------------------ ------------------------------ ------------------------------
 START_SCN STATUS   CAPTURED_SCN APPLIED_SCN USE  FIRST_SCN
---------- -------- ------------ ----------- --- ----------
SOURCE_DATABASE
----------------------------------------------------------------------------------------------------
SOURCE_DBID SOURCE_RESETLOGS_SCN SOURCE_RESETLOGS_TIME LOGMINER_ID NEGATIVE_RULE_SET_NAME
----------- -------------------- --------------------- ----------- ------------------------------
NEGATIVE_RULE_SET_OWNER        MAX_CHECKPOINT_SCN REQUIRED_CHECKPOINT_SCN LOGFILE_ STATUS_CH
------------------------------ ------------------ ----------------------- -------- ---------
ERROR_NUMBER
------------
ERROR_MESSAGE
----------------------------------------------------------------------------------------------------
VERSION                                                          CAPTURE_TY LAST_ENQUEUED_SCN
---------------------------------------------------------------- ---------- -----------------
CHECKPOINT_RETENTION_TIME
-------------------------
START_TIME                                                                  PURPOSE
--------------------------------------------------------------------------- -------------------
CLIENT_NAME
----------------------------------------------------------------------------------------------------
CLIENT_S OLDEST_SCN FILTERED_SCN
-------- ---------- ------------
OGG$CAP_E_HAWK                 OGG$Q_E_HAWK                   GGS_ADMIN
                                                              GGS_ADMIN
    256229 DISABLED       346591      346586 NO      256229
HAWK
 3171223736                    1             912525304           3
                                           346420                  346586 IMPLICIT 23-MAY-16


11.2.0.4.0                                                       LOCAL
                        0
22-MAY-16 04.21.31.000000 PM                                                GoldenGate Capture
E_HAWK
DISABLED     346586       255600


ARROW:(SYS@hawk):PRIMARY> exec DBMS_CAPTURE_ADM.STOP_CAPTURE('OGG$CAP_E_HAWK');

PL/SQL procedure successfully completed.

ARROW:(SYS@hawk):PRIMARY> select * from dba_capture;

CAPTURE_NAME                   QUEUE_NAME                     QUEUE_OWNER
------------------------------ ------------------------------ ------------------------------
RULE_SET_NAME                  RULE_SET_OWNER                 CAPTURE_USER
------------------------------ ------------------------------ ------------------------------
 START_SCN STATUS   CAPTURED_SCN APPLIED_SCN USE  FIRST_SCN
---------- -------- ------------ ----------- --- ----------
SOURCE_DATABASE
----------------------------------------------------------------------------------------------------
SOURCE_DBID SOURCE_RESETLOGS_SCN SOURCE_RESETLOGS_TIME LOGMINER_ID NEGATIVE_RULE_SET_NAME
----------- -------------------- --------------------- ----------- ------------------------------
NEGATIVE_RULE_SET_OWNER        MAX_CHECKPOINT_SCN REQUIRED_CHECKPOINT_SCN LOGFILE_ STATUS_CH
------------------------------ ------------------ ----------------------- -------- ---------
ERROR_NUMBER
------------
ERROR_MESSAGE
----------------------------------------------------------------------------------------------------
VERSION                                                          CAPTURE_TY LAST_ENQUEUED_SCN
---------------------------------------------------------------- ---------- -----------------
CHECKPOINT_RETENTION_TIME
-------------------------
START_TIME                                                                  PURPOSE
--------------------------------------------------------------------------- -------------------
CLIENT_NAME
----------------------------------------------------------------------------------------------------
CLIENT_S OLDEST_SCN FILTERED_SCN
-------- ---------- ------------
OGG$CAP_E_HAWK                 OGG$Q_E_HAWK                   GGS_ADMIN
                                                              GGS_ADMIN
    256229 DISABLED       346591      346586 NO      256229
HAWK
 3171223736                    1             912525304           3
                                           346420                  346586 IMPLICIT 23-MAY-16


11.2.0.4.0                                                       LOCAL
                        0
22-MAY-16 04.21.31.000000 PM                                                GoldenGate Capture
E_HAWK
DISABLED     346586       255600


ARROW:(SYS@hawk):PRIMARY>


ARROW:(SYS@hawk):PRIMARY> exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

PL/SQL procedure successfully completed.

ARROW:(SYS@hawk):PRIMARY> select * from dba_capture;

no rows selected

ARROW:(SYS@hawk):PRIMARY>

Previous LMS For Schools Moving to Canvas in US and Canada

Michael Feldstein - Mon, 2016-05-23 16:39

By Phil Hill

During the most recent quarterly earnings call for Instructure, an analyst asked an interesting question (despite starting off from the Chris Farley Show format).

Corey Greendale (First Analysis Securities Corporation):  Awesome. A couple of other things on the, primarily on the Higher Ed space but I guess on education space, there’s a whole lot of couple questions about the competitive environment. When you’re and I don’t know if you will ever get into this level of granularity but when you got competitive wins against the Blackboard, are those predominantly from legacy ANGEL, are you getting those wins as much from Learn as well.

Josh Coates (CEO of Instructure):  A lot of them are from Learn. Most, you know I don’t have the stats right off the top of my head. A lot of the ANGEL and WebCT stuff is been mopped up in the previous years and so, what’s left the majority of what’s left is Learn and our win rate against Blackboard it continues to be incredibly high, not just domestically but internationally as well.

In fact, I think most of three out of the four international schools that we announced in this earnings where Blackboard Learn replacements, so yes Learn’s getting it.

The question gets to the issue of whether Canvas is just picking up higher education clients coming off of discontinued LMSs (Angel, WebCT, etc) or if they are picking up clients from ongoing platforms such as Blackboard Learn. Beyond the obvious interest of investors and other ed tech vendors, this issue in general affects higher education institutions going through a vendor selection – for the system in consideration, are there many other schools considering the same migration path?

Thanks to the work we’ve been doing with LISTedTECH and our new subscription service, we can answer this question in a little more detail. One of the charts we share shows higher education migrations over the past five years in the US and Canada.

(Chart: LMS changes over the past five years in the US and Canada)

Looking at the bottom right of the chart, you can see that Canvas has picked up clients previously using WebCT, Learn, ANGEL, Moodle, LearningStudio, Sakai, Homegrown, and Brightspace (from D2L).

Canvas Wins

Josh could have answered that Canvas actually has picked up more clients formerly using Learn than those using ANGEL, but a small portion of Learn includes those using the discontinued Basic edition. Nevertheless, there are quite a few wins coming from systems that have not been discontinued, which I think was the main point of the question.

As you can see, there is interesting data on other systems as well. Some notes from this view:

  • Blackboard Learn new implementations have mostly come from the company’s own discontinued WebCT and ANGEL platforms, with some small contributions from LearningStudio and Moodle.

Learn Wins

  • D2L Brightspace has an impressive client retention rate, with very few clients migrating off of their system.

Brightspace Losses

What other trends do you see in this data?

HIUG Interact 2016 Agenda

Jim Marion - Mon, 2016-05-23 15:30

In a couple of weeks, I will be presenting the following sessions at the HIUG Interact 2016 conference in San Antonio:

  • 16165 : PeopleSoft Fluid User Interface – Deep Dive: Grand Oaks D, Mon, Jun 13, 2016 (03:15 PM - 04:15 PM)
  • 16164 : PeopleTools Tips & Techniques: Grand Oaks D, Tue, Jun 14, 2016 (02:30 PM - 03:30 PM)
  • 16163 : Tech Clinic: Application Designer: Grand Oaks D, Wed, Jun 15, 2016 (12:30 PM - 02:30 PM)

database management

Pat Shuff - Mon, 2016-05-23 13:52
Today we are going to look at managing an Oracle database. We are going to start with a 12c database that we created in the Oracle Public Cloud. We selected database as a service (as opposed to virtual image), monthly billing, 12c, and Enterprise Edition High Performance. We accepted the defaults for the table size so that we can figure out later how to extend it, and selected no backups rather than starting RMAN for daily incrementals or cloud object storage for weekly full backups.

We basically have four options for managing a database. If we have a small number of databases we might use sqlplus sysdba command line access and grind through administration by hand. We also have a database monitor that is installed by default with the database cloud service. We can dive into the database through this monitor and look at long running queries, tablespace sizes, and general utilization. We can also connect with SQL Developer and look at the new DBA interfaces that were added in the latest release in early 2016. The fourth and final way of administering is to look at commercial management tools like Oracle Enterprise Manager (OEM) or other tools that aggregate multiple systems and servers and give you visibility beyond just the database. These commercial tools allow you to look at the layer that you are most interested in. You can get a PeopleSoft Management Pack for OEM that allows you to look at purchase order flow or payroll requests. You can get diagnostics and tuning packs for the application server and database that allow you to look at which part of the PeopleSoft implementation is taking the longest. Is it the network connection? Is it a poorly tuned Java Virtual Machine that is thrashing memory? Is it a SQL statement that is waiting on a lock? Is it a storage spindle that is getting hammered by another application? Is it a runaway process on your database server that is consuming all of the resources? All of these questions can be answered with a monitoring tool if you know not only how to use it but also what is available for free and what you need to purchase to get the richer and more valuable information.
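As a simple illustration of the command line route (my own sketch, not from the post), a query like the following run from sqlplus as a DBA user shows allocated versus free space per tablespace:

-- allocated vs free space per tablespace (requires access to the DBA views)
select
        df.tablespace_name,
        round(df.bytes/1024/1024)        allocated_mb,
        round(nvl(fs.bytes,0)/1024/1024) free_mb
from
        (select tablespace_name, sum(bytes) bytes from dba_data_files group by tablespace_name) df,
        (select tablespace_name, sum(bytes) bytes from dba_free_space group by tablespace_name) fs
where
        fs.tablespace_name (+) = df.tablespace_name
order by
        df.tablespace_name
;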

To get to the database monitor we go to the cloud services console (which changed over the weekend so it looks a little different), click on database, click on Service Console, and click on the database name.

If we click on the dbaas_monitor menu item in the hamburger menu system to the right of the service name it might fail to connect the first time. It will take the ip address of the database and try to open https://ip address/dbaas_monitor. We first need to open up port 443 to be able to communicate to this service.

To get to the network connection we need to go to the Compute Service Monitor, click on the Network tab, and change the proper port number for our server prs12cHP. If we hover over the labels on the left we see what ports we are looking for. We are specifically interested in the https protocol. If we click on the hamburger menu next to this line item we can Update the security list which pops up a new window.

To enable this protocol we enable the service and click the Update button. Once we do this we can retry the dbaas_monitor web page. We should expect a security exception the first time and will need to add an exception. We log in as dbaas_monitor with the password that we entered in the bottom left of the screen for the system passwords when we created the database.

At this point we can look at CPU utilization, tablespace usage, whether the database is running, and all of the other monitoring capabilities. Below are the screen shots for the listener and for the table sizes and storage by pluggable database.

We can look a little deeper at things like alerts, wait times, and real-time SQL monitoring. These are all available through the command line, but providing a service like this allows junior database administrators to look at things quickly and easily.

The biggest drawback to this system is that you get a short snapshot and not a long term historic archive of this data. If we use Enterprise Manager, which we will look at in a later blog, we collect the data from a central site into a local repository and can look back at months-old data rather than live data or data from just the past few hours.

In summary, if we use platform as a service, we get tooling and reporting integrated into the service rather than having to spin these up ourselves or look at everything from the command line, as is done with infrastructure as a service. We get other features too, but we are diving into database monitoring this week. We briefly touched on database monitoring through what was historically called dbmonitor and is moving towards dbaas_monitor, or a central Enterprise Manager pane of glass for database services in our data center and in the cloud. One of the key differentiators between Oracle Database as a Service and Amazon RDS is database monitoring. We will look at database monitoring for Amazon RDS later this week and note that there are significant differences.

Analytics in Search

PeopleSoft Technology Blog - Mon, 2016-05-23 12:06

Content contributed by Balaji Pattabhiraman

In two recent posts (creating simplified analytics--user perspective and creating analytic home pages), we’ve described some of the capabilities of Simplified Analytics.  In this post, we’ll examine how analytics and charting can be used to enhance search. 

Searching is an important part of the PeopleSoft UI, and we are continually improving it.  For example, we included related actions in search pages, enabling users to act on data directly from the search results without even having to navigate to transactions.  We also included commercial search features like facets and filtering to focus result sets.  Following that, we provided search in the Fluid UI, making search easier to use across form factors, including phones and tablets.

Now as part of Fluid Component Search, we have enabled analytics which helps users visualize and understand their results better, so they can act on those results more effectively.  Let’s examine what that looks like and how to enable it.

Analytics on a Search Page

First, recall what a classic component search page looked like.



It’s useful, but not as powerful as it could be.  As of PeopleTools 8.54 and higher we have transformed the search experience for all Fluid Components. 

Here is an example of a Contracts search.  In this example, the user can look at the Gross Contract Amount by Supplier in addition to searching for the contract.  Notice how the search page is transformed in Fluid. The left panel provides filters to narrow the results or initiate a new search. Here we have some Administrator and Contract Style filters applied. You can select a row of detail to open the page for that contract detail after narrowing the search. In this case, Gross Amount by Supplier Name.  Applying a filter will update the analytics as well as updating the search result grid because the analytics reflect live data.



Analytic charts are available for this page, so the user switches on the chart slider in the upper right of the page.



Here is the same search result page with the analytic turned on.



The charts are interactive like any Fluid Pivot Grid.  This means that you can select any data point and drill down using related actions.  Here we view the details by another field for the selected supplier (for example, view contract date vs. amount for a supplier).  You can also narrow the results for the supplier from the chart data point by selecting the Detailed View option.  There are more options under the gear menu in the upper right.  This lets you change chart options, download data to spreadsheets, etc.



Adding Analytics to a Search

Now let’s see how to build a Fluid component search to include Pivot Grids.  Open the component properties.  (You would have already enabled the Fluid Flag while building the fluid component.)  In this case we’ll set the Search Type to Standard.  The other option for Search Type is Master-Detail, which enables you to persist the search results on the left panel after selecting a search result and opening the transaction.  This allows easy navigation between the search results without navigating back to the search page.   (You would also have the search/add record filled for the component, which is similar to any classic component with search.)



Next, open the Pivot Grid wizard and create a new pivot grid model.

In step 1, give the pivot grid a title and, optionally, a description.  You can also set the Type and Owner here.

In step 2, choose the data source type.  Here we’ve set the type as Component.  Then choose your component.  Also, choose a tree name and access group you generally use with the product line.  In this case we’ve chosen the Purchase Order access group.  This will ensure a query can be created from the search record and the end users of the component can see the search results. (Note: The search record will be added as part of this tree and the permissions from the component are added to the tree).  When we click Apply, this creates a query with the same name as the component and the fields will be listed.  Now we choose the fields that we want the end-user to see. 

In step 3, we set overriding labels for the fields and choose the column type--whether a field is only for display in the grid or whether it should be part of the chart axis or the value the chart plots.  In addition, all the key fields become prompts for the pivot grid model automatically.  We can set default values for these prompts.  Note that you can change the prompt fields and change the criteria by modifying the generated query. In most cases, the key fields serving as prompts will suffice. You can also choose which prompts to show to or hide from the end user in this step.

In step 4, we specify axis information including which fields will be the x-axis, y-axis and the filters.  In addition, under the Fluid Mode Options, we can fill the list view Title and Summary fields to show if your component is to be used on small form factors like a phone. 

You can optionally preview the chart in the last step and save the pivot grid definition.  This completes the setup.
Now when a user navigates to the component, the search page will show up based on the configuration.   (Typically a user might navigate from a home page tile, registered using the tile wizard.)

Now you can open the Search page by selecting the Group and Special Contracts tile.  You can view it with or without the chart.


Because this uses the PeopleSoft Fluid User Interface, the search page renders nicely on smaller form factors like smart phones and tablets.  This requires no additional configuration.  Here is the same page on a smart phone.



7 Signs Your EPM is Lagging Behind Your Competition

Look Smarter Than You Are - Mon, 2016-05-23 12:06
Regardless of industry, regardless of size, regardless of duration, all companies have similar issues in their financial analysis, planning, and consolidation areas. From building budgets to financial reporting, how can CFOs, VPs of Finance, Directors of FP&A and Controllers tell if their FP&A teams are falling behind their competitors? Here are seven signs that your Enterprise Performance Management (EPM) environments are stuck in the last decade:
  1. Strategy is planned verbally or in spreadsheets. While the majority of strategic CFOs agree that Finance should be looking forward and not backward, most strategic planning is done in Excel or, worse, out loud in various meetings. There is no modeling unless someone comes up with a bunch of linked spreadsheet formulas. Strategies are agreed to in conference rooms and conveyed at a high level via email (or they aren’t communicated at all). Strategies are evaluated by whoever has the best anecdote: “well, the last time that happened, we did this…” The only thing worse than not having a solution for strategic planning is not doing strategic planning at all. Speaking of spreadsheets…
  2. Excel is the key enabling technology in your FP&A department. One sure way to tell if your EPM function is falling behind is to ask “what is the single most important tool your department uses when running reports? Performing analysis? Coming up with a strategic plan? Preparing the budget? Modeling business changes?” If the answer to four-out-of-five of those is “Microsoft Excel”, ask yourself if that was by design or if people just used Excel because they didn’t have a better system. Excel is a wonderful tool (I open it every morning and don’t close it until I leave), but it was meant to be a way to look at grids of data. It was not meant to store business logic and it was never meant to be a database. Force your FP&A group to do everything with Excel and expect to be waiting for every answer… and then praying everyone got their formulas right when you make business decisions based on those answers.
  3. There is only one version of the budget. No one really thinks that there’s only one way that the year will end up, but most companies insist on a single version of a budget (and not even a range, but a specific number). Not only are EPM Laggards (companies with EPM trailing behind their peer groups) not planning multiple scenarios, they’re insisting that the whole company come up with a single number and then stick to it no matter what external factors are at play. Ron Dimon refers to scenario plans as “ready at hand plans” waiting to be used once we see how our strategic initiatives are enacted. EPM Laggards not only don’t have additional plans ready, they insist on holding everyone in the organization accountable to one single number, outside world be damned.
  4. Budgets favor precision over timeliness. Your competition realizes that a forecast that’s 95% accurate delivered today is more helpful than a budget that was 98% accurate 6 months ago. Yet EPM Laggards spend months coming up with a budget that’s precise to the dollar and then updating it periodically at a high level. It’s amazing how often FP&A groups end up explaining away budget vs. actual discrepancies by saying “the budget was accurate at the start of the year, but then things happened.” Budgets should be reforecasted continuously whenever anything material changes. Think about it: if you had one mapping app that gave you an estimate of your arrival time to the 1/100th of a second at the time you departed and another mapping app that constantly refined your arrival time as you drove, which one would you choose?
  5. No one takes actions on the reports. Edward’s Rule of Reporting: every report should either lead to a better question or a physical action. If your department is producing a report that doesn’t lead someone to ask a bigger, better, bolder question and doesn’t lead someone to take a physical action, change the report. Or stop producing the report entirely. EPM Laggards spend an inordinate amount of time collecting data and generating reports that don’t lead to any change in behavior. EPM Leaders periodically stop and ask themselves “if I arrived today, is this what I would build?” Half the time, the answer is “no,” and the other half the time, the answer is “if I arrived today, I actually wouldn’t build this report at all.”
  6. Most time is spent looking backwards. Imagine you’re driving a car. Put your hands on the wheel and look around. Notice that most of your visual space is the front windshield which shows you what’s coming up ahead of you. Some of what you see is taken up by the dashboard so you can get a real-time idea of where you are right now. And if you glance up, there’s a small rear-view mirror that tells you what’s behind you. A combination of all three of these (windshield, dashboard, and rearview mirror) gives you some idea of when you should steer right or left, brake, or accelerate. In a perfect EPM world, your time would be divided the same way: most would be spent looking ahead (budgeting and forecasting), some time would be spent glancing down to determine where you are at the moment, and very little would be spent looking backwards since, let’s face it, the past is really difficult to change. In your car, you’d only look at the mirror if you were changing lanes or you were worried about being hit from behind, and business is similar yet most EPM Laggards drive their cars by looking backwards.
  7. Labor is devoted to collecting & reporting and not planning & analyzing. If you spend all of your time gathering data, reconciling data, and reporting on data, you’re answering the question “what happened?” Your competition is spending their time analyzing (“why did this happen?”) and then planning to take action (“what should I do next?”). There is a finite amount of time in the world and sadly, that holds true in our FP&A departments too. If your EPM system is focused on collecting, consolidating, & reporting and your competition has their EPM focused on analyzing, modeling, & planning, who do you think will win in the long run?


What You Can Do

If you look at those seven top signs you’re lagging in your EPM functions and wonder how to improve, the first step is to stop building anything new. While this seems counterintuitive, if you take a tactical approach to solving any one area, you’re going to put in place a single point solution that will need to be thrown away or redone as you get closer to your overall vision for EPM. So what’s step 1? Have an EPM vision. Ask yourself where you want your company to be in three years. What do you want out of consolidation, reporting, analysis, modeling, and planning and how will all of those functions be integrated?
You are not alone. I have seen hundreds of FP&A departments in my time struggle with having a vision for just one area let alone a long-range vision. Even when leadership has a vision, it quite often focuses on system improvements (we’re not sure what to do, so let’s throw technology at it!) rather than try to improve processes too. Thankfully, there is hope and as my good friends at G.I. Joe always say, knowing is half the battle.
More Information

Wednesday, May 25, at 1PM Eastern, I’m holding a webcast to share lessons I’ve learned over the years on how to turn EPM Laggards into EPM Leaders. If you want help coming up with your three year EPM Roadmap, visit http://bit.ly/StrategyWC to sign up. It’s free and you’ll come away with some hopefully valuable ideas on where to go with performance management at your company.
If you have any questions, ask them in the comments or tweet them to me @ERoske.
Categories: BI & Warehousing

Virtual Partitions

Jonathan Lewis - Mon, 2016-05-23 06:16

Here’s a story of (my) failure prompted by a recent OTN posting.

The OP wants to use composite partitioning based on two different date columns – the table should be partitioned by range on the first date and subpartitioned by month on the second date. Here’s the (slightly modified) table creation script he supplied:


rem
rem     Script: virtual_partition.sql
rem     Dated:  May 2016
rem

CREATE TABLE M_DTX
(
        R_ID    NUMBER(3),
        R_AMT   NUMBER(5),
        DATE1   DATE,
        DATE2   DATE,
        VC GENERATED ALWAYS AS (EXTRACT(MONTH FROM DATE2))
)
PARTITION BY RANGE (DATE1) interval (numtoyminterval(1,'MONTH'))
SUBPARTITION BY LIST (VC)
        SUBPARTITION TEMPLATE (
                SUBPARTITION M1 VALUES (1),
                SUBPARTITION M2 VALUES (2),
                SUBPARTITION M3 VALUES (3),
                SUBPARTITION M4 VALUES (4),
                SUBPARTITION M5 VALUES (5),
                SUBPARTITION M6 VALUES (6),
                SUBPARTITION M7 VALUES (7),
                SUBPARTITION M8 VALUES (8),
                SUBPARTITION M9 VALUES (9),
                SUBPARTITION M10 VALUES (10),
                SUBPARTITION M11 VALUES (11),
                SUBPARTITION M12 VALUES (12)
        )
        (
        PARTITION M_DTX_2015060100 VALUES LESS THAN (TO_DATE('2015-06-01 00:00:01', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        )
;

There’s nothing particularly exciting about this – until you get to the query requirement – the user wants to query on date1 and date2, and doesn’t know about the virtual month column, e.g. (and, I know that there should be a to_date() or ANSI equivalent here):

SELECT * FROM m_dtx WHERE date1 = trunc(sysdate) AND date2 = '01-Jun-2016';

Now, as a general rule, you don’t expect partition elimination to occur unless the partitioning column appears with a predicate that makes elimination possible, so your first response to this query is that it could eliminate on date1, but can’t possibly eliminate on vc because vc isn’t in the where clause. However, it’s possible that the partitioning code might be coded to recognise that the subpartitioning is on a virtual column derived from date2, so perhaps it could generate a new predicate before optimising, for example:

date2 = '01-Jun-2016'  => vc = 6

Unfortunately, your first response is correct – the optimizer doesn’t get this clever, and doesn’t do the sub-partition elimination. Here’s the execution plan from 12.1.0.2 for the sample query, followed by the execution plan when I explicitly add the predicate vc = 6.


SQL_ID  8vk1a05uv16mb, child number 0
-------------------------------------
SELECT /*+ dynamic_sampling(0) */  * FROM m_dtx WHERE date1 =
trunc(sysdate) AND date2 = to_date('01-Jun-2016','dd-mon-yyyy')

Plan hash value: 3104206240

------------------------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
|   2 |   PARTITION LIST ALL   |       |     1 |    57 |    15   (7)| 00:00:01 |     1 |    12 |
|*  3 |    TABLE ACCESS FULL   | M_DTX |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(("DATE2"=TO_DATE(' 2016-06-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "DATE1"=TRUNC(SYSDATE@!)))



SQL_ID  33q012bdhjrpn, child number 0
-------------------------------------
SELECT /*+ dynamic_sampling(0) */  * FROM m_dtx WHERE date1 =
trunc(sysdate) AND date2 = to_date('01-Jun-2016','dd-mon-yyyy') and vc
= 6

Plan hash value: 938710559

------------------------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
|   2 |   PARTITION LIST SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |     6 |     6 |
|*  3 |    TABLE ACCESS FULL   | M_DTX |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(("DATE2"=TO_DATE(' 2016-06-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "DATE1"=TRUNC(SYSDATE@!)))


Note how the predicate vc = 6 doesn’t show up in the predicate section in either case, but the execution plan shows PARTITION LIST ALL at operation 2 when we omit the predicate and PARTITION LIST SINGLE when we include it (with suitable values also appearing for Pstart and Pstop). (The cost, by the way, is the cost of scanning a whole (range) partition whether or not the optimizer expects to restrict that scan to just one sub-partition.)

So the optimizer isn’t quite clever enough (yet). BUT … the optimizer can be very clever with constraints, combining constraints with predicates and applying transitive closure to produce new predicates – so maybe we could get the optimizer to do this if we helped it a little bit. Given the table definition supplied I’m going to assume that the date2 column is supposed to be non-null, so let’s add some truthful constraints/declarations to the table definition:


alter table m_dtx modify date2 not null;
alter table m_dtx modify vc  not null;
alter table m_dtx add constraint md_ck_vc check (vc = extract(month from date2));

Alas, this didn’t make any difference to the execution plan. But it did do something surprising to my attempts to load data into the table:


insert into m_dtx (r_id, r_amt, date1, date2)
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        mod(rownum, 1000),
        rownum,
        trunc(sysdate,'yyyy') + dbms_random.value(0,365),
        trunc(sysdate,'yyyy') + dbms_random.value(0,365)
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4
;

insert into m_dtx (r_id, r_amt, date1, date2)
*
ERROR at line 1:
ORA-01400: cannot insert NULL into (???)

So the array insert with the virtual column doesn’t like the NOT NULL constraint on the virtual column, presumably because vc is still null when the constraint is checked (though there’s no problem with single-row inserts using the values() clause – I wonder what happens with the PL/SQL “FORALL” clause). So let’s remove the NOT NULL constraint on vc and see what happens.


insert into m_dtx (r_id, r_amt, date1, date2)
*
ERROR at line 1:
ORA-02290: check constraint (TEST_USER.MD_CK_VC) violated

Unsurprisingly, given that Oracle didn’t like the NOT NULL constraint, the critical check constraint also fails. This, by the way, is odd, because a check constraint should accept a row when the constraint doesn’t evaluate to FALSE, so (a) vc can’t have been evaluated at this point or the constraint would evaluate to TRUE – which is not FALSE – and (b) vc at this point can no longer be null or the constraint would evaluate to NULL – which is not FALSE: so what “value” has vc got that makes the constraint check return FALSE?
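
As a quick way to check the single-row behaviour (and to experiment with the FORALL question), a minimal sketch against the same table might look like this; the FORALL result is exactly the open question, so treat it as an experiment rather than a conclusion:

rem
rem     Not part of the original test - a quick probe of the two insert
rem     styles against the md_ck_vc check constraint.
rem

rem single-row insert with a values() clause (reported to work):
insert into m_dtx (r_id, r_amt, date1, date2)
values (1, 1, trunc(sysdate), to_date('01-Jun-2016','dd-mon-yyyy'));

rem bulk-bind insert via FORALL (behaviour unverified - run it and see):
declare
        type t_dates is table of date index by pls_integer;
        m_dates t_dates;
begin
        for i in 1..10 loop
                m_dates(i) := trunc(sysdate,'yyyy') + i;
        end loop;
        forall i in 1..m_dates.count
                insert into m_dtx (r_id, r_amt, date1, date2)
                values (i, i, trunc(sysdate), m_dates(i));
end;
/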

Bottom line:

I can see some scope for an optimizer enhancement that tries to find eliminating predicates from virtual columns; and I think there’s a need for ensuring that we can safely add constraints to virtual columns – after all we might want to create an index on a virtual column and sometimes we need a NOT NULL declaration to ensure that an index-only execution path can be found. Unfortunately I have to end this blog without finding an immediate solution for the OP.
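
For anyone unfamiliar with that last point, this is the sort of thing it refers to: purely an illustrative sketch (the index name is invented), not part of the test above.

rem illustration only - an index-only path on vc would need the
rem optimizer to know that vc can never be null
create index m_dtx_vc_idx on m_dtx(vc) local;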

Despite this failure, though, there are cases (as I showed a couple of years ago) where the optimizer in 12c can get clever enough to recognize the connection between a queried date column and the virtual partitioning column based on that date column.


GoldenGate 12.2: Patch 17030189 required for Integrated trail format RELEASE 12.2 or later

Michael Dinh - Sun, 2016-05-22 14:26
EXTRACT Abending With OGG-02912 (Doc ID 2091679.1)

Apply the alternate script prvtlmpg.plb (included in the Oracle GoldenGate installation directory) to the mining database to work around this limitation.

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ ll prv*
-rw-r-----. 1 oracle oinstall 1272 Dec 28  2010 prvtclkm.plb
-rw-r-----. 1 oracle oinstall 9487 May 27  2015 prvtlmpg.plb
-rw-r-----. 1 oracle oinstall 3263 May 27  2015 prvtlmpg_uninstall.sql
oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$

The other option in this case would be to request a backport, since the patch is not available for all Database 11g releases.

Implementing the workaround:

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun May 22 15:23:27 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production

ARROW:(SYS@hawk):PRIMARY> @prvtlmpg.plb

Oracle GoldenGate Workaround prvtlmpg

This script provides a temporary workaround for bug 17030189.
It is strongly recommended that you apply the official Oracle
Patch for bug 17030189 from My Oracle Support instead of using
this workaround.

This script must be executed in the mining database of Integrated
Capture. You will be prompted for the username of the mining user.
Use a double quoted identifier if the username is case sensitive
or contains special characters. In a CDB environment, this script
must be executed from the CDB$ROOT container and the mining user
must be a common user.

===========================  WARNING  ==========================
You MUST stop all Integrated Captures that belong to this mining
user before proceeding!
================================================================

Enter Integrated Capture mining user: ggs_admin

Installing workaround...
No errors.
No errors.
No errors.
Installation completed.
ARROW:(SYS@hawk):PRIMARY> exit
Disconnected from Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$

oracle@arrow:hawk:/u01/app/oracle/product/11.2.0/se_1/dbs
$ opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/11.2.0/se_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0/se_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /u01/app/oracle/product/11.2.0/se_1/cfgtoollogs/opatch/opatch2016-05-22_15-26-10PM_1.log

Lsinventory Output file location : /u01/app/oracle/product/11.2.0/se_1/cfgtoollogs/opatch/lsinv/lsinventory2016-05-22_15-26-10PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Database 11g                                                  11.2.0.4.0
There are 1 products installed in this Oracle Home.


There are no Interim patches installed in this Oracle Home.


--------------------------------------------------------------------------------

OPatch succeeded.
oracle@arrow:hawk:/u01/app/oracle/product/11.2.0/se_1/dbs
$
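
Since the directory listing above also shows prvtlmpg_uninstall.sql, the workaround can presumably be backed out once the official patch for bug 17030189 has been applied. A speculative sketch (stop all Integrated Extracts for the mining user first and verify the exact steps against Doc ID 2091679.1):

-- speculative clean-up: run as SYSDBA on the mining database, from the
-- GoldenGate installation directory, once the official patch is in place
@prvtlmpg_uninstall.sql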

Create GoldenGate 12.2 Manager

Michael Dinh - Sun, 2016-05-22 14:14

I typically don’t like to see WARNING if I can help it.

GoldenGate 12c has some security features (such as the Manager ACCESSRULE parameter) to allow or deny access to its processes.

Be careful: if an incorrect IPADDR or PROG is used, it will prevent the Pump Extract from delivering trail data to the target server.

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ tail -100 ggserr.log
2016-05-22 12:25:07  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start mgr.
2016-05-22 12:25:07  WARNING OGG-01877  Oracle GoldenGate Manager for Oracle, mgr.prm:  Missing explicit accessrule for server collector.
2016-05-22 12:25:07  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #7).
2016-05-22 12:25:07  INFO    OGG-00983  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager started (port 7901).
2016-05-22 12:25:09  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2016-05-22 12:25:46  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2016-05-22 12:25:51  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): stop mgr.
2016-05-22 12:25:51  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host [127.0.0.1]:39551 (STOP).
2016-05-22 12:25:51  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #7).
2016-05-22 12:25:51  WARNING OGG-00938  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager is stopping at user request.
2016-05-22 12:26:00  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start mgr.
2016-05-22 12:26:00  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #2).
2016-05-22 12:26:00  INFO    OGG-00983  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager started (port 7901).

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ cat dirprm/mgr.prm
PORT 7901
DYNAMICPORTLIST 15100-15120
ACCESSRULE, PROG server, IPADDR *, ALLOW
ACCESSRULE, PROG *, IPADDR *, ALLOW
USERIDALIAS ggs_admin
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
-- AUTOSTART ER *
-- AUTORESTART ER *, RETRIES 5, WAITMINUTES 2, RESETMINUTES 60
CHECKMINUTES 5
LAGCRITICALMINUTES 15
oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$
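
For reference, the OGG-01877 warning above appears to have gone away once PROG server had its own explicit ACCESSRULE. If the wildcard address is too permissive, the same rule can presumably be narrowed to a specific subnet. A sketch only, with a placeholder address:

-- illustrative mgr.prm fragment; 192.168.1.* is a placeholder subnet
-- remote pump Extracts connect to the local server/collector, so that is the rule to tighten
ACCESSRULE, PROG server, IPADDR 192.168.1.*, ALLOW
ACCESSRULE, PROG *, IPADDR *, ALLOW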

Create GoldenGate 12.2 Wallet

Michael Dinh - Sun, 2016-05-22 13:44

So what’s different about this post versus other posts? I share my mistakes with you.

Did you know there was a DEFAULT domain? If you didn’t, neither did I; I only found out by using

info credentialstore

alter credentialstore add user ggs_admin alias ggs_admin domain admin
USERIDALIAS ggs_admin DOMAIN admin

alter credentialstore add user ggs_admin alias ggs_admin
USERIDALIAS ggs_admin

oracle@arrow:thor:/u01/app/12.2.0.1/ggs02
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (arrow.localdomain) 1> create wallet

Created wallet at location 'dirwlt'.

Opened wallet at location 'dirwlt'.

GGSCI (arrow.localdomain) 2> add credentialstore

Credential store created in ./dircrd/.

GGSCI (arrow.localdomain) 3> alter credentialstore add user ggs_admin alias ggs_admin domain admin
Password:

Credential store in ./dircrd/ altered.

GGSCI (arrow.localdomain) 4> info credentialstore

Reading from ./dircrd/:

No information found in default domain OracleGoldenGate.

Other domains:

admin

To view other domains, use INFO CREDENTIALSTORE DOMAIN <domain>

GGSCI (arrow.localdomain) 5> info credentialstore domain admin

Reading from ./dircrd/:

Domain: admin

Alias: ggs_admin
Userid: ggs_admin

GGSCI (arrow.localdomain) 6> exit


oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.



GGSCI (arrow.localdomain) 1> alter credentialstore add user ggs_admin alias ggs_admin
Password:

Credential store in ./dircrd/ altered.

GGSCI (arrow.localdomain) 2> info credentialstore

Reading from ./dircrd/:

Default domain: OracleGoldenGate

  Alias: ggs_admin
  Userid: ggs_admin

Other domains:

  admin

To view other domains, use INFO CREDENTIALSTORE DOMAIN <domain>

GGSCI (arrow.localdomain) 3> exit
oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$
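
A quick sanity check of an alias before putting it into a parameter file is to log in with it from GGSCI; the DOMAIN clause is only needed for the non-default domain. A minimal sketch (the usual response is along the lines of 'Successfully logged into database.' when the alias and password are good):

DBLOGIN USERIDALIAS ggs_admin
DBLOGIN USERIDALIAS ggs_admin DOMAIN admin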

 


TRUNCATEing a Table makes an UNUSABLE Index VALID again

Hemant K Chitale - Sun, 2016-05-22 09:54
Here's something I learned from Jonathan Lewis some time ago.

If you set an Index to be UNUSABLE but later issue a TRUNCATE TABLE, the Index becomes VALID again --- i.e. the Index gets updated with rows subsequently inserted.

SQL> connect hemant/hemant
Connected.
SQL> drop table target_data purge;

Table dropped.

SQL> create table target_data as select * from source_data where 1=2;

Table created.

SQL> create index target_data_ndx_1
2 on target_data(owner, object_type, object_name);

Index created.

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> col segment_name format a30
SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME                   SEGMENT_TYPE       BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA                    TABLE                         49
TARGET_DATA_NDX_1              INDEX                         19

SQL>
SQL> col index_name format a30
SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME                     STATUS
------------------------------ --------
TARGET_DATA_NDX_1              VALID

SQL>


So, I have a VALID Index on my Table.

I now make it UNUSABLE and add rows to it.

SQL> alter index target_Data_ndx_1 unusable;

Index altered.

SQL> select status
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1';

STATUS
--------
UNUSABLE

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME                     STATUS
------------------------------ --------
TARGET_DATA_NDX_1              UNUSABLE

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME                   SEGMENT_TYPE       BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA                    TABLE                        104

SQL>


Oracle actually drops the Index segment (so you don't see it in USER_SEGMENTS) when it is set to UNUSABLE, although the Index definition is still present. The Index doesn't "grow" because the Segment doesn't exist.

Let me TRUNCATE the table.

SQL> truncate table target_data;

Table truncated.

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME                   SEGMENT_TYPE       BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA                    TABLE                      .0625
TARGET_DATA_NDX_1              INDEX                      .0625

SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME                     STATUS
------------------------------ --------
TARGET_DATA_NDX_1              VALID

SQL>


Immediately after the TRUNCATE TABLE, the Index Segment is instantiated and the Index becomes VALID again, so inserting rows will update the Index. My last explicit command against the Index was ALTER INDEX ... UNUSABLE, but that is no longer its current state!

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME                   SEGMENT_TYPE       BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA                    TABLE                         49
TARGET_DATA_NDX_1              INDEX                         19

SQL>


So, repopulating the Table has expanded the Index again.
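
A practical corollary: if the index was made UNUSABLE to speed up a truncate-and-reload cycle, the TRUNCATE silently undoes that, so the UNUSABLE state has to be re-asserted after the TRUNCATE. A sketch, assuming the default skip_unusable_indexes=true:

truncate table target_data;

-- the TRUNCATE has just re-created the index segment and made it VALID,
-- so mark the index UNUSABLE again before the bulk load, and rebuild it afterwards
alter index target_data_ndx_1 unusable;

insert /*+ APPEND */ into target_data
select * from source_data;

commit;

alter index target_data_ndx_1 rebuild;
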
.
.
.


Categories: DBA Blogs