Feed aggregator

Oracle’s Fashion Technology Makers: Soldering, Sewing, and Storytelling

Oracle AppsLab - Tue, 2016-05-24 07:44


Many hands make light (emitting diodes) work. Oracle Applications User Experience (OAUX) gets down to designing fashion technology (#fashtech) solutions in a fun maker event with a serious research and learning intent. OAUX Senior Director and resident part-time fashion blogger, Ultan “Gucci Translated” O’Broin (@ultan), reports from the Redwood City runway.

Fashion and Technology: What’s New?

Wearable technology is not new. Elizabeth I of England was a regal early adopter. In wearing an “armlet” given to her by Robert Dudley, First Earl of Leicester in 1571, the Tudor Queen set in motion that fusion of wearable technology and style that remains evident in the Fitbits and Apple Watches of today.

Elizabeth I’s device was certainly fly, described as “in the closing thearof a clocke, and in the forepart of the same a faire lozengie djamond without a foyle, hanging thearat a rounde juell fully garnished with dyamondes and a perle pendaunt.”

Regardless of the time we live in, for wearable tech to be successful it has to look good; it has to appeal to our sense of fashion. Technologists now recognize this and involve clothing experts in production and branding decisions. For example, at Google I/O 2016, Google and Levi’s announced an interactive jacket based on the Google Jacquard technology that makes fabric interactive, applied to a Levi’s commuter jacket design.

Fashion Technology Maker Event: The Summer Collection

Misha Vaughan’s (@mishavaughan) OAUX Communications and Outreach team joined forces with Jake Kuramoto’s (@jkuramot) AppsLab (@theappslab) Emerging Tech folks recently in a joint maker event at Oracle HQ to design and build wearable tech solutions that brought the world of fashion and technology (#fashtech) together.

Julian Orr (@orr_ux) and intern David Xie flash off those word-face smart watches

Tawny Le (@ihearttanie) creates an interactive glove solution for aspiring keyboardists of all sorts.

The event included the creation of interactive light skirts, smart watch word faces, touch-sensitive drum gloves, sound-reactive jewelry, and more from the Adafruit wearables collection.

Sarahi Mireles (@sarahimireles) and Ana Tomescu (@annatomescu) work on that fiber-optic pixie skirt.

The occasion was a hive of activity, with sewing machines, soldering irons, hot-glue guns, Arduino technology, fiber-optic cables, LEDs, 3D printers, and the rest, all in evidence during the production process.


Fashtech events like this also offer opportunities for discovery: the team found that interactive synth drum gloves can not only create music, but also be used as input devices to write code. Why limit yourself to one kind of keyboard?

Discovery, design, development: All part of the maker’s day. (L-r) Noel Portugal (@noelportugal), Raymond Xie (@YuhuaXie), and Lancy Silveira (@LancyS) get ready for the big reveal!

Wearable Tech in the Enterprise: Wi-Fi and Hi-Heels

What does all this fashioning of solutions mean for the enterprise? Wearable technology is part of the OAUX Glance, Scan, Commit design philosophy, key to a Mobility strategy that reflects our cloud-driven world of work. Smart watches are as much a part of the continuum of devices we use interchangeably throughout the day as smart phones, tablets, or laptops. To borrow a phrase from OAUX Group Vice President Jeremy Ashley (@jrwashley) at the recent Maker Faire event, in choosing what works best for us, be it clothing or technology: one size does not fit all.

Maker events such as ours fuel creativity and innovation in the enterprise. They inspire the creation of human solutions using technology, ones that represent a more human way of working.

A distinction between the tech we use and what we wear at work and at home is no longer convenient. We’ve moved from BYOD to WYOD. Unless that wearable tech, a deeply personal device and style statement all in one, reflects our tastes and sense of fashion, we won’t use it unless we’re forced to. The #fashtech design heuristic is: make it beautiful or make it invisible. So, let’s avoid wearables becoming swearables and style that tech, darling!

On Demand Webcast: Driving Improved Sales Productivity and Customer Engagement with Cloud

WebCenter Team - Tue, 2016-05-24 05:35
The digital age has radically changed sales and customer service, from the way sales reps conduct work to how customers interact with brands. Being able to access information anytime, anywhere is an imperative for sales reps and customers alike.

So, how can you meet these rising expectations to the delight of your account reps as you help boost the bottom line without decimating your IT budget on costly CRM purchases?

This brief webcast discusses how cloud content, tools and processes can improve sales performance, shorten sales cycles, and improve customer engagement. We reveal how cloud tools can transform sales productivity and customer engagement in your enterprise, and ultimately drive revenue. View today!

Recursion with recursive WITH

OraFAQ Articles - Tue, 2016-05-24 04:03

I recently had the opportunity to talk with Tom Kyte (!), and in the course of our conversation, he really made me face up to the fact that the SQL syntax I use every day is frozen in time: I’m not making much use of the analytic functions and other syntax that Oracle has introduced since 8i.

read more

Using DBMS_STREAMS_ADM To Cleanup GoldenGate

Michael Dinh - Tue, 2016-05-24 00:33

This is really messed up. I chose GoldenGate because I did not want to mess around with streams.

When using Integrated Capture or Delivery, knowing Streams is a prerequisite.

Apologies as the format is not pretty.

The QUEUE table was indeed missing and this is what I get for monkeying around.

To resolve the issue: exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
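
Before removing the entire Streams configuration, it's worth confirming that the queue really is gone. A sanity-check sketch against DBA_QUEUES (the GGS_ADMIN owner matches this environment):

select owner, name, queue_table
from   dba_queues
where  owner = 'GGS_ADMIN';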

GGSCI (arrow.localdomain as ggs_admin@hawk) 3> unREGISTER EXTRACT e_hawk DATABASE

2016-05-23 19:16:32  ERROR   OGG-08221  Cannot register or unregister EXTRACT E_HAWK because of the following SQL error: 
OCI Error ORA-24010: QUEUE "GGS_ADMIN"."OGG$Q_E_HAWK" does not exist
ORA-06512: at "SYS.DBMS_APPLY_ADM_INTERNAL", line 468
ORA-06512: at "SYS.DBMS_APPLY_ADM", line 724
ORA-06512: at line 1 (status = 24010).

GGSCI (arrow.localdomain as ggs_admin@hawk) 4> exit


ARROW:(SYS@hawk):PRIMARY> select * from dba_capture;

CAPTURE_NAME                   QUEUE_NAME                     QUEUE_OWNER
------------------------------ ------------------------------ ------------------------------
RULE_SET_NAME                  RULE_SET_OWNER                 CAPTURE_USER
------------------------------ ------------------------------ ------------------------------
 START_SCN STATUS   CAPTURED_SCN APPLIED_SCN USE  FIRST_SCN
---------- -------- ------------ ----------- --- ----------
SOURCE_DATABASE
----------------------------------------------------------------------------------------------------
SOURCE_DBID SOURCE_RESETLOGS_SCN SOURCE_RESETLOGS_TIME LOGMINER_ID NEGATIVE_RULE_SET_NAME
----------- -------------------- --------------------- ----------- ------------------------------
NEGATIVE_RULE_SET_OWNER        MAX_CHECKPOINT_SCN REQUIRED_CHECKPOINT_SCN LOGFILE_ STATUS_CH
------------------------------ ------------------ ----------------------- -------- ---------
ERROR_NUMBER
------------
ERROR_MESSAGE
----------------------------------------------------------------------------------------------------
VERSION                                                          CAPTURE_TY LAST_ENQUEUED_SCN
---------------------------------------------------------------- ---------- -----------------
CHECKPOINT_RETENTION_TIME
-------------------------
START_TIME                                                                  PURPOSE
--------------------------------------------------------------------------- -------------------
CLIENT_NAME
----------------------------------------------------------------------------------------------------
CLIENT_S OLDEST_SCN FILTERED_SCN
-------- ---------- ------------
OGG$CAP_E_HAWK                 OGG$Q_E_HAWK                   GGS_ADMIN
                                                              GGS_ADMIN
    256229 DISABLED       346591      346586 NO      256229
HAWK
 3171223736                    1             912525304           3
                                           346420                  346586 IMPLICIT 23-MAY-16


11.2.0.4.0                                                       LOCAL
                        0
22-MAY-16 04.21.31.000000 PM                                                GoldenGate Capture
E_HAWK
DISABLED     346586       255600


ARROW:(SYS@hawk):PRIMARY> exec DBMS_CAPTURE_ADM.STOP_CAPTURE('OGG$CAP_E_HAWK');

PL/SQL procedure successfully completed.

ARROW:(SYS@hawk):PRIMARY> select * from dba_capture;

CAPTURE_NAME                   QUEUE_NAME                     QUEUE_OWNER
------------------------------ ------------------------------ ------------------------------
RULE_SET_NAME                  RULE_SET_OWNER                 CAPTURE_USER
------------------------------ ------------------------------ ------------------------------
 START_SCN STATUS   CAPTURED_SCN APPLIED_SCN USE  FIRST_SCN
---------- -------- ------------ ----------- --- ----------
SOURCE_DATABASE
----------------------------------------------------------------------------------------------------
SOURCE_DBID SOURCE_RESETLOGS_SCN SOURCE_RESETLOGS_TIME LOGMINER_ID NEGATIVE_RULE_SET_NAME
----------- -------------------- --------------------- ----------- ------------------------------
NEGATIVE_RULE_SET_OWNER        MAX_CHECKPOINT_SCN REQUIRED_CHECKPOINT_SCN LOGFILE_ STATUS_CH
------------------------------ ------------------ ----------------------- -------- ---------
ERROR_NUMBER
------------
ERROR_MESSAGE
----------------------------------------------------------------------------------------------------
VERSION                                                          CAPTURE_TY LAST_ENQUEUED_SCN
---------------------------------------------------------------- ---------- -----------------
CHECKPOINT_RETENTION_TIME
-------------------------
START_TIME                                                                  PURPOSE
--------------------------------------------------------------------------- -------------------
CLIENT_NAME
----------------------------------------------------------------------------------------------------
CLIENT_S OLDEST_SCN FILTERED_SCN
-------- ---------- ------------
OGG$CAP_E_HAWK                 OGG$Q_E_HAWK                   GGS_ADMIN
                                                              GGS_ADMIN
    256229 DISABLED       346591      346586 NO      256229
HAWK
 3171223736                    1             912525304           3
                                           346420                  346586 IMPLICIT 23-MAY-16


11.2.0.4.0                                                       LOCAL
                        0
22-MAY-16 04.21.31.000000 PM                                                GoldenGate Capture
E_HAWK
DISABLED     346586       255600


ARROW:(SYS@hawk):PRIMARY>


ARROW:(SYS@hawk):PRIMARY> exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

PL/SQL procedure successfully completed.

ARROW:(SYS@hawk):PRIMARY> select * from dba_capture;

no rows selected

ARROW:(SYS@hawk):PRIMARY>

HIUG Interact 2016 Agenda

Jim Marion - Mon, 2016-05-23 16:30

In a couple of weeks, I will be presenting the following sessions at the HIUG Interact 2016 conference in San Antonio:

  • 16165 : PeopleSoft Fluid User Interface – Deep Dive: Grand Oaks D, Mon, Jun 13, 2016 (03:15 PM - 04:15 PM)
  • 16164 : PeopleTools Tips & Techniques: Grand Oaks D, Tue, Jun 14, 2016 (02:30 PM - 03:30 PM)
  • 16163 : Tech Clinic: Application Designer: Grand Oaks D, Wed, Jun 15, 2016 (12:30 PM - 02:30 PM)

database management

Pat Shuff - Mon, 2016-05-23 14:52
Today we are going to look at managing an Oracle database. We are going to start with a 12c database that we created in the Oracle Public Cloud. We selected database as a service (as opposed to virtual image), monthly billing, 12c, and Enterprise Edition High Performance. We accepted the default table size so that we can later figure out how to extend it, and selected no backups rather than starting RMAN for daily incrementals or cloud object storage for weekly full backups.

We basically have four options for managing a database. If we have a small number of databases we might use sqlplus sysdba command-line access and grind through administration by hand. We also have a database monitor that is installed by default with the database cloud service; we can dive into the database through it and look at long-running queries, tablespace sizes, and general utilization. We can also connect with SQL Developer and use the new DBA interfaces that were added in the release of early 2016. The fourth and final option is a commercial management tool like Oracle Enterprise Manager (OEM), or another tool that aggregates multiple systems and servers and gives you visibility beyond just the database. These commercial tools let you look at the layer that you are most interested in. You can get a PeopleSoft Management Pack for OEM that lets you look at purchase order flow or payroll requests. You can get Diagnostics and Tuning Packs for the application server and database that let you see which part of the PeopleSoft implementation is taking the longest. Is it the network connection? Is it a poorly tuned Java Virtual Machine that is memory thrashing? Is it a SQL statement that is waiting on a lock? Is it a storage spindle that is getting hammered by another application? Is it a runaway process on your database server that is consuming all of the resources? All of these questions can be answered with a monitoring tool, provided you know not only how to use it but also what is available for free and what you need to purchase to get the richer, more valuable information.

To get to the database monitor we go to the cloud services console (which changed over the weekend so it looks a little different), click on database, click on Service Console, and click on the database name.

If we click on the dbaas_monitor menu item in the hamburger menu to the right of the service name, it might fail to connect the first time: it takes the IP address of the database and tries to open https://ip address/dbaas_monitor. We first need to open port 443 to be able to communicate with this service.

To get to the network settings we go to the Compute Service Monitor, click on the Network tab, and enable the proper port for our server prs12cHP. If we hover over the labels on the left we see which ports we are looking for; we are specifically interested in the https protocol. If we click on the hamburger menu next to this line item we can Update the security list, which pops up a new window.

To enable this protocol we enable the service and click the Update button. Once we do this we can retry the dbaas_monitor web page. We should expect a security exception the first time and will need to add an exception. We log in as dbaas_monitor with the password that we entered in the bottom left of the screen for the system passwords when we created the database.
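
Once the port is open, a quick reachability check from a terminal might look like the following sketch (substitute your instance's public IP; -k is needed because the monitor presents a self-signed certificate, which is also why the browser shows that security exception):

$ curl -k https://<ip address>/dbaas_monitor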

At this point we can look at CPU utilization, tablespace usage, whether the database is running, and all the other monitoring capabilities. Below are the screen shots for the listener and for the table sizes and storage by pluggable database.

We can look a little deeper at things like alerts, wait times, and real time sql monitoring. These are all available through command line but providing a service like this allows junior database administrators to look at stuff quickly and easily.
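
For anyone who prefers the command line, a rough tablespace-usage query against the standard dictionary views gives the same picture; a sketch (sizes in MB):

select tablespace_name,
       round(sum(bytes)/1048576) mb
from   dba_data_files
group by tablespace_name
order by tablespace_name;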

The biggest drawback to this system is that you get a short snapshot and not a long-term historic archive of this data. If we use Enterprise Manager (which we will look at in a later blog) from a central site that collects the data into a local repository, we can look back at months-old data rather than live data or data from just the past few hours.

In summary, with platform as a service we get tooling and reporting integrated into the service, rather than having to spin these up ourselves or look at everything from the command line as we would with infrastructure as a service. We get other features too, but we are diving into database monitoring this week. We briefly touched on database monitoring through what was historically called dbmonitor and is moving towards dbaas_monitor, or a central Enterprise Manager pane of glass for database services in our data center and in the cloud. One of the key differentiators between Oracle Database as a Service and Amazon RDS is database monitoring; we will look at monitoring for Amazon RDS later this week and see that there are significant differences.

Analytics in Search

PeopleSoft Technology Blog - Mon, 2016-05-23 13:06

Content contributed by Balaji Pattabhiraman

In two recent posts (creating simplified analytics--user perspective and creating analytic home pages), we’ve described some of the capabilities of Simplified Analytics.  In this post, we’ll examine how analytics and charting can be used to enhance search. 

Searching is an important part of the PeopleSoft UI, and we are continually improving it.  For example, we included related actions in search pages, enabling users to act on data directly from the search results without even having to navigate to transactions.  We also included commercial search features like facets and filtering to focus result sets.  Following that, we provided search in the Fluid UI, making search easier to use across form factors, including phones and tablets.

Now, as part of Fluid Component Search, we have enabled analytics that help users visualize and understand their results better, so they can act on those results more effectively.  Let’s examine what that looks like and how to enable it.

Analytics on a Search Page

First, recall what a classic component search page looked like.



It’s useful, but not as powerful as it could be.  In PeopleTools 8.54 and higher, we have transformed the search experience for all Fluid Components. 

Here is an example of a Contracts search.  In this example, the user can look at the Gross Contract Amount by Supplier in addition to searching for the contract.  Notice how the search page is transformed in Fluid: the left panel provides filters to narrow the results or initiate a new search (here, Administrator and Contract Style filters are applied), and after narrowing the search you can select a row of detail to open the page for that contract.  The chart in this case shows Gross Amount by Supplier Name.  Applying a filter updates the analytics as well as the search result grid, because the analytics reflect live data.



Analytic charts are available for this page, so the user switches on the chart slider in the upper right of the page.



Here is the same search result page with the analytic turned on.



The charts are interactive, like any Fluid Pivot Grid.  This means that you can select any data point and drill down using related actions.  Here we view the details by another field for the selected supplier (for example, contract date vs. amount).  You can also narrow the results for the supplier from the chart data point by selecting the Detailed View option.   There are more options under the gear menu in the upper right, which lets you change chart options, download data to spreadsheets, and so on.



Adding Analytics to a Search

Now let’s see how to build a Fluid component search that includes Pivot Grids.  Open the component properties.  (You would already have enabled the Fluid flag while building the fluid component.)  In this case we’ll set the Search Type to Standard.  The other option for Search Type is Master-Detail, which persists the search results in the left panel after a result is selected and the transaction opened, allowing easy navigation between the search results without returning to the search page.   (You would also have the search/add record filled in for the component, just as for any classic component with search.)



Next, open the Pivot Grid wizard and create a new pivot grid model.

In step 1, give the pivot grid a title and, optionally, a description.  You can also set the Type and Owner here.

In step 2, choose the data source type; here we’ve set the type to Component.  Then choose your component.  Also, choose a tree name and access group you generally use with the product line; in this case we’ve chosen the Purchase Order access group.  This ensures that a query can be created from the search record and that end users of the component can see the search results. (Note: the search record is added as part of this tree, and the permissions from the component are added to the tree.)  When we click Apply, a query is created with the same name as the component and its fields are listed.  Now we choose the fields that we want the end user to see. 

In step 3, we set overriding labels for the fields and choose the column type: whether a field is only for display in the grid, or whether it should be part of the chart axis or the value the chart plots.  In addition, all the key fields automatically become prompts for the pivot grid model, and we can set default values for these prompts.  Note that you can change the prompt fields and the criteria by modifying the generated query, though in most cases the key fields serving as prompts will suffice. You can also choose in this step which prompts to show or hide to the end user.

In step 4, we specify axis information: which fields form the x-axis, the y-axis, and the filters.  In addition, under the Fluid Mode Options, we can fill in the list view Title and Summary fields to show if your component is to be used on small form factors like a phone. 

You can optionally preview the chart in the last step and save the pivot grid definition.  This completes the setup.
Now when a user navigates to the component, the search page will show up based on the configuration.   (Typically a user might navigate from a home page tile, registered using the tile wizard.)

Now you can open the Search page by selecting the Group and Special Contracts tile.  You can view it with or without the chart.


Because this uses the PeopleSoft Fluid User Interface, the search page renders nicely on smaller form factors like smart phones and tablets.  This requires no additional configuration.  Here is the same page on a smart phone.



7 Signs Your EPM is Lagging Behind Your Competition

Look Smarter Than You Are - Mon, 2016-05-23 13:06
Regardless of industry, regardless of size, regardless of duration, all companies have similar issues in their financial analysis, planning, and consolidation areas. From building budgets to financial reporting, how can CFOs, VPs of Finance, Directors of FP&A and Controllers tell if their FP&A teams are falling behind their competitors? Here are seven signs that your Enterprise Performance Management (EPM) environments are stuck in the last decade:
  1. Strategy is planned verbally or in spreadsheets. While the majority of strategic CFOs agree that Finance should be looking forward and not backward, most strat planning is done in Excel or, worse, out loud in various meetings. There is no modeling unless someone comes up with a bunch of linked spreadsheet formulas. Strategies are agreed to in conference rooms and conveyed at a high level via email (or they aren’t communicated at all). Strategies are evaluated by whoever has the best anecdote: “well, the last time that happened, we did this…” The only thing worse than not having a solution for strategic planning is not doing strategic planning at all. Speaking of spreadsheets…
  2. Excel is the key enabling technology in your FP&A department. One sure way to tell if your EPM function is falling behind is to ask “what is the single most important tool your department uses when running reports? Performing analysis? Coming up with a strategic plan? Preparing the budget? Modeling business changes?” If the answer to four-out-of-five of those is “Microsoft Excel”, ask yourself if that was by design or if people just used Excel because they didn’t have a better system. Excel is a wonderful tool (I open it every morning and don’t close it until I leave), but it was meant to be a way to look at grids of data. It was not meant to store business logic and it was never meant to be a database. Force your FP&A group to do everything with Excel and expect to be waiting for every answer… and then praying everyone got their formulas right when you make business decisions based on those answers.
  3. There is only one version of the budget. No one really thinks that there’s only one way that the year will end up, but most companies insist on a single version of a budget (and not even a range, but a specific number). Not only are EPM Laggards (companies with EPM trailing behind their peer groups) not planning multiple scenarios, they’re insisting that the whole company come up with a single number and then stick to it no matter what external factors are at play. Ron Dimon refers to scenario plans as “ready at hand plans” waiting to be used once we see how our strategic initiatives are enacted. EPM Laggards not only don’t have additional plans ready, they insist on holding everyone in the organization accountable to one single number, outside world be damned.
  4. Budgets favor precision over timeliness. Your competition realizes that a forecast that’s 95% accurate delivered today is more helpful than a budget that was 98% accurate 6 months ago. Yet EPM Laggards spend months coming up with a budget that’s precise to the dollar and then updating it periodically at a high level. It’s amazing how often FP&A groups end up explaining away budget vs. actual discrepancies by saying “the budget was accurate at the start of the year, but then things happened.” Budgets should be reforecasted continuously whenever anything material changes. Think about it: if you had one mapping app that gave you an estimate of your arrival time to the 1/100th of a second at the time you departed and another mapping app that constantly refined your arrival time as you drove, which one would you choose?
  5. No one takes actions on the reports. Edward’s Rule of Reporting: every report should either lead to a better question or a physical action. If your department is producing a report that doesn’t lead someone to ask a bigger, better, bolder question and doesn’t lead someone to take a physical action, change the report. Or stop producing the report entirely. EPM Laggards spend an inordinate amount of time collecting data and generating reports that don’t lead to any change in behavior. EPM Leaders periodically stop and ask themselves “if I arrived today, is this what I would build?” Half the time, the answer is “no,” and the other half the time, the answer is “if I arrived today, I actually wouldn’t build this report at all.”
  6. Most time is spent looking backwards. Imagine you’re driving a car. Put your hands on the wheel and look around. Notice that most of your visual space is the front windshield which shows you what’s coming up ahead of you. Some of what you see is taken up by the dashboard so you can get a real-time idea of where you are right now. And if you glance up, there’s a small rear-view mirror that tells you what’s behind you. A combination of all three of these (windshield, dashboard, and rearview mirror) gives you some idea of when you should steer right or left, brake, or accelerate. In a perfect EPM world, your time would be divided the same way: most would be spent looking ahead (budgeting and forecasting), some time would be spent glancing down to determine where you are at the moment, and very little would be spent looking backwards since, let’s face it, the past is really difficult to change. In your car, you’d only look at the mirror if you were changing lanes or you were worried about being hit from behind, and business is similar yet most EPM Laggards drive their cars by looking backwards.
  7. Labor is devoted to collecting & reporting and not planning & analyzing. If you spend all of your time gathering data, reconciling data, and reporting on data, you’re answering the question “what happened?” Your competition is spending their time analyzing (“why did this happen?”) and then planning to take action (“what should I do next?”). There is a finite amount of time in the world and sadly, that holds true in our FP&A departments too. If your EPM system is focused on collecting, consolidating, & reporting and your competition has their EPM focused on analyzing, modeling, & planning, who do you think will win in the long run?


What You Can Do
If you look at those seven top signs you’re lagging in your EPM functions and wonder how to improve, the first step is to stop building anything new. While this seems counterintuitive, if you take a tactical approach to solving any one area, you’re going to put in place a single point solution that will need to be thrown away or redone as you get closer to your overall vision for EPM. So what’s step 1? Have an EPM vision. Ask yourself where you want your company to be in three years. What do you want out of consolidation, reporting, analysis, modeling, and planning and how will all of those functions be integrated?

You are not alone. I have seen hundreds of FP&A departments in my time struggle with having a vision for just one area let alone a long-range vision. Even when leadership has a vision, it quite often focuses on system improvements (we’re not sure what to do, so let’s throw technology at it!) rather than try to improve processes too. Thankfully, there is hope and as my good friends at G.I. Joe always say, knowing is half the battle.

More Information
Wednesday, May 25, at 1PM Eastern, I’m holding a webcast to share lessons I’ve learned over the years on how to turn EPM Laggards into EPM Leaders. If you want help coming up with your three year EPM Roadmap, visit http://bit.ly/StrategyWC to sign up. It’s free and you’ll come away with some hopefully valuable ideas on where to go with performance management at your company.

If you have any questions, ask them in the comments or tweet them to me @ERoske.
Categories: BI & Warehousing

Virtual Partitions

Jonathan Lewis - Mon, 2016-05-23 07:16

Here’s a story of (my) failure prompted by a recent OTN posting.

The OP wants to use composite partitioning based on two different date columns – the table should be partitioned by range on the first date and subpartitioned by month on the second date. Here’s the (slightly modified) table creation script he supplied:


rem
rem     Script: virtual_partition.sql
rem     Dated:  May 2016
rem

CREATE TABLE M_DTX
(
        R_ID    NUMBER(3),
        R_AMT   NUMBER(5),
        DATE1   DATE,
        DATE2   DATE,
        VC GENERATED ALWAYS AS (EXTRACT(MONTH FROM DATE2))
)
PARTITION BY RANGE (DATE1) interval (numtoyminterval(1,'MONTH'))
SUBPARTITION BY LIST (VC)
        SUBPARTITION TEMPLATE (
                SUBPARTITION M1 VALUES (1),
                SUBPARTITION M2 VALUES (2),
                SUBPARTITION M3 VALUES (3),
                SUBPARTITION M4 VALUES (4),
                SUBPARTITION M5 VALUES (5),
                SUBPARTITION M6 VALUES (6),
                SUBPARTITION M7 VALUES (7),
                SUBPARTITION M8 VALUES (8),
                SUBPARTITION M9 VALUES (9),
                SUBPARTITION M10 VALUES (10),
                SUBPARTITION M11 VALUES (11),
                SUBPARTITION M12 VALUES (12)
        )
        (
        PARTITION M_DTX_2015060100 VALUES LESS THAN (TO_DATE('2015-06-01 00:00:01', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        )
;

There’s nothing particularly exciting about this – until you get to the query requirement – the user wants to query on date1 and date2, and doesn’t know about the virtual month column, e.g. (and, I know that there should be a to_date() or ANSI equivalent here):

SELECT * FROM m_dtx WHERE date1 = trunc(sysdate) AND date2 = '01-Jun-2016';

Now, as a general rule, you don’t expect partition elimination to occur unless the partitioning column appears with a predicate that makes elimination possible, so your first response to this query is that it could eliminate on date1, but can’t possibly eliminate on vc because vc isn’t in the where clause. However, it’s possible that the partitioning code might be coded to recognise that the subpartition is on a virtual column that is derived from date2, so perhaps it could generate a new predicate before optimising, for example:

date2 = '01-Jun-2016'  => vc = 6

Unfortunately, your first response is correct – the optimizer doesn’t get this clever, and doesn’t do the sub-partition elimination. Here’s the execution plan from 12.1.0.2 for the sample query, followed by the execution plan when I explicitly add the predicate vc = 6.


SQL_ID  8vk1a05uv16mb, child number 0
-------------------------------------
SELECT /*+ dynamic_sampling(0) */  * FROM m_dtx WHERE date1 =
trunc(sysdate) AND date2 = to_date('01-Jun-2016','dd-mon-yyyy')

Plan hash value: 3104206240

------------------------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
|   2 |   PARTITION LIST ALL   |       |     1 |    57 |    15   (7)| 00:00:01 |     1 |    12 |
|*  3 |    TABLE ACCESS FULL   | M_DTX |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(("DATE2"=TO_DATE(' 2016-06-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "DATE1"=TRUNC(SYSDATE@!)))



SQL_ID  33q012bdhjrpn, child number 0
-------------------------------------
SELECT /*+ dynamic_sampling(0) */  * FROM m_dtx WHERE date1 =
trunc(sysdate) AND date2 = to_date('01-Jun-2016','dd-mon-yyyy') and vc
= 6

Plan hash value: 938710559

------------------------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
|   2 |   PARTITION LIST SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |     6 |     6 |
|*  3 |    TABLE ACCESS FULL   | M_DTX |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(("DATE2"=TO_DATE(' 2016-06-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "DATE1"=TRUNC(SYSDATE@!)))


Note how the predicate vc = 6 doesn’t show up in the predicate section in either case, but the execution plan shows PARTITION LIST ALL at operation 2 when we omit the predicate and PARTITION LIST SINGLE when we include it (with suitable values also appearing for Pstart and Pstop). (The cost, by the way, is the cost of scanning a whole (range) partition whether or not the optimizer expects to restrict that scan to just one sub-partition.)

So the optimizer isn’t quite clever enough (yet). BUT … the optimizer can be very clever with constraints, combining constraints with predicates and applying transitive closure to produce new predicates – so maybe we could get the optimizer to do this if we helped it a little bit. Given the table definition supplied I’m going to assume that the date2 column is supposed to be non-null, so let’s add some truthful constraints/declarations to the table definition:


alter table m_dtx modify date2 not null;
alter table m_dtx modify vc  not null;
alter table m_dtx add constraint md_ck_vc check (vc = extract(month from date2));

Alas, this didn’t make any difference to the execution plan. But it did do something surprising to my attempts to load data into the table:


insert into m_dtx (r_id, r_amt, date1, date2)
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        mod(rownum, 1000),
        rownum,
        trunc(sysdate,'yyyy') + dbms_random.value(0,365),
        trunc(sysdate,'yyyy') + dbms_random.value(0,365)
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4
;

insert into m_dtx (r_id, r_amt, date1, date2)
*
ERROR at line 1:
ORA-01400: cannot insert NULL into (???)

So the array insert with the virtual column doesn’t like the NOT NULL constraint on the virtual column because vc is, presumably, still null when the constraint is checked (though there’s no problem with single row inserts with the values() clause – I wonder what happens with the PL/SQL “FORALL” clause) – so let’s remove the not null constraint on vc and see what happens.


insert into m_dtx (r_id, r_amt, date1, date2)
*
ERROR at line 1:
ORA-02290: check constraint (TEST_USER.MD_CK_VC) violated

Unsurprisingly, given the fact that Oracle didn’t like the not null constraint, the critical check constraint also fails. This, by the way, is odd because a check constraint should accept a row when the constraint doesn’t evaluate to FALSE, so (a) vc can’t have been evaluated at this point or the constraint would evaluate to TRUE – which is not FALSE, and (b) vc at this point can no longer be null or the constraint would evaluate to NULL – which is not FALSE: so what “value” has vc got that makes the constraint check return FALSE ?
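
As an aside, the single-row case mentioned above is easy to check; a sketch of such a test (my assumption of what it would look like, using the table as created above):

-- per the comment above, a single-row values() insert completes without error
insert into m_dtx (r_id, r_amt, date1, date2)
values (1, 1, trunc(sysdate), to_date('01-Jun-2016','dd-mon-yyyy'));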

Bottom line:

I can see some scope for an optimizer enhancement that tries to find eliminating predicates from virtual columns; and I think there’s a need for ensuring that we can safely add constraints to virtual columns – after all we might want to create an index on a virtual column and sometimes we need a NOT NULL declaration to ensure that an index-only execution path can be found. Unfortunately I have to end this blog without finding an immediate solution for the OP.

Despite this failure, though, there are cases (as I showed a couple of years ago) where the optimizer in 12c can get clever enough to recognize the connection between a queried date column and the virtual partitioning column based on that date column.


Oracle CREATE TABLE Syntax and Examples – The Complete Guide

Complete IT Professional - Mon, 2016-05-23 06:00
Creating database tables in Oracle is one of the most common tasks an Oracle developer or Oracle DBA does. Learn how to create tables, what the syntax is, and see some examples in this article. What Is The Create Table Command Used For? The CREATE TABLE command is used to create a database table. It […]
Categories: Development

GoldenGate 12.2 Patch 17030189 required Integrated trail format RELEASE 12.2 or later

Michael Dinh - Sun, 2016-05-22 15:26
EXTRACT Abending With OGG-02912 (Doc ID 2091679.1)

An alternate script, prvtlmpg.plb (included in the Oracle GoldenGate installation directory), can be applied to the mining database to work around this limitation.

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ ll prv*
-rw-r-----. 1 oracle oinstall 1272 Dec 28  2010 prvtclkm.plb
-rw-r-----. 1 oracle oinstall 9487 May 27  2015 prvtlmpg.plb
-rw-r-----. 1 oracle oinstall 3263 May 27  2015 prvtlmpg_uninstall.sql
oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$

The other option in this case would be to request a backport, since the patch is not available for all database 11g releases.

Implementing the workaround:

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun May 22 15:23:27 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production

ARROW:(SYS@hawk):PRIMARY> @prvtlmpg.plb

Oracle GoldenGate Workaround prvtlmpg

This script provides a temporary workaround for bug 17030189.
It is strongly recommended that you apply the official Oracle
Patch for bug 17030189 from My Oracle Support instead of using
this workaround.

This script must be executed in the mining database of Integrated
Capture. You will be prompted for the username of the mining user.
Use a double quoted identifier if the username is case sensitive
or contains special characters. In a CDB environment, this script
must be executed from the CDB$ROOT container and the mining user
must be a common user.

===========================  WARNING  ==========================
You MUST stop all Integrated Captures that belong to this mining
user before proceeding!
================================================================

Enter Integrated Capture mining user: ggs_admin

Installing workaround...
No errors.
No errors.
No errors.
Installation completed.
ARROW:(SYS@hawk):PRIMARY> exit
Disconnected from Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$

oracle@arrow:hawk:/u01/app/oracle/product/11.2.0/se_1/dbs
$ opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/11.2.0/se_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0/se_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /u01/app/oracle/product/11.2.0/se_1/cfgtoollogs/opatch/opatch2016-05-22_15-26-10PM_1.log

Lsinventory Output file location : /u01/app/oracle/product/11.2.0/se_1/cfgtoollogs/opatch/lsinv/lsinventory2016-05-22_15-26-10PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Database 11g                                                  11.2.0.4.0
There are 1 products installed in this Oracle Home.


There are no Interim patches installed in this Oracle Home.


--------------------------------------------------------------------------------

OPatch succeeded.
oracle@arrow:hawk:/u01/app/oracle/product/11.2.0/se_1/dbs
$
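
For completeness: once the official patch is eventually applied, the uninstall script shipped alongside the workaround (prvtlmpg_uninstall.sql, visible in the directory listing above) can presumably be run the same way to back the workaround out:

ARROW:(SYS@hawk):PRIMARY> @prvtlmpg_uninstall.sql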

Create GoldenGate 12.2 Manager

Michael Dinh - Sun, 2016-05-22 15:14

I typically don’t like to see a WARNING if I can help it.

GoldenGate 12c has security features to control access and prevent unauthorized connections.

Be careful: if an incorrect IPADDR or PROG is used, the Pump Extract will be prevented from delivering to the target server.

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ tail -100 ggserr.log
2016-05-22 12:25:07  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start mgr.
2016-05-22 12:25:07  WARNING OGG-01877  Oracle GoldenGate Manager for Oracle, mgr.prm:  Missing explicit accessrule for server collector.
2016-05-22 12:25:07  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #7).
2016-05-22 12:25:07  INFO    OGG-00983  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager started (port 7901).
2016-05-22 12:25:09  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2016-05-22 12:25:46  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2016-05-22 12:25:51  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): stop mgr.
2016-05-22 12:25:51  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host [127.0.0.1]:39551 (STOP).
2016-05-22 12:25:51  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #7).
2016-05-22 12:25:51  WARNING OGG-00938  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager is stopping at user request.
2016-05-22 12:26:00  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start mgr.
2016-05-22 12:26:00  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #2).
2016-05-22 12:26:00  INFO    OGG-00983  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager started (port 7901).

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ cat dirprm/mgr.prm
PORT 7901
DYNAMICPORTLIST 15100-15120
ACCESSRULE, PROG server, IPADDR *, ALLOW
ACCESSRULE, PROG *, IPADDR *, ALLOW
USERIDALIAS ggs_admin
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
-- AUTOSTART ER *
-- AUTORESTART ER *, RETRIES 5, WAITMINUTES 2, RESETMINUTES 60
CHECKMINUTES 5
LAGCRITICALMINUTES 15
oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$
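
Note how the second start in ggserr.log was granted via rule #2 rather than the implicit rule #7, presumably because the explicit ACCESSRULE for PROG server had been added to mgr.prm in between; that is what silences the OGG-01877 warning. A tighter, purely hypothetical variant that locks the collector down to the source host might look like this sketch (the addresses are made up, and as warned above, getting them wrong stops the pump delivering):

-- hypothetical lockdown: trail data accepted only from the source host
ACCESSRULE, PROG server, IPADDR 192.168.56.10, ALLOW
-- local GGSCI control still allowed
ACCESSRULE, PROG ggsci, IPADDR 127.0.0.1, ALLOW
-- everything else refused
ACCESSRULE, PROG *, IPADDR *, DENY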

Create GoldenGate 12.2 Wallet

Michael Dinh - Sun, 2016-05-22 14:44

So what’s different from this post versus other posts? I share my mistakes with you.

Did you know there was a DEFAULT domain? If you didn’t, neither did I; I only found out by using

info credentialstore

alter credentialstore add user ggs_admin alias ggs_admin domain admin
USERIDALIAS ggs_admin DOMAIN admin

alter credentialstore add user ggs_admin alias ggs_admin
USERIDALIAS ggs_admin

oracle@arrow:thor:/u01/app/12.2.0.1/ggs02
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (arrow.localdomain) 1> create wallet

Created wallet at location 'dirwlt'.

Opened wallet at location 'dirwlt'.

GGSCI (arrow.localdomain) 2> add credentialstore

Credential store created in ./dircrd/.

GGSCI (arrow.localdomain) 3> alter credentialstore add user ggs_admin alias ggs_admin domain admin
Password:

Credential store in ./dircrd/ altered.

GGSCI (arrow.localdomain) 4> info credentialstore

Reading from ./dircrd/:

No information found in default domain OracleGoldenGate.

Other domains:

admin

To view other domains, use INFO CREDENTIALSTORE DOMAIN <domain>

GGSCI (arrow.localdomain) 5> info credentialstore domain admin

Reading from ./dircrd/:

Domain: admin

Alias: ggs_admin
Userid: ggs_admin

GGSCI (arrow.localdomain) 6> exit


oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.



GGSCI (arrow.localdomain) 1> alter credentialstore add user ggs_admin alias ggs_admin
Password:

Credential store in ./dircrd/ altered.

GGSCI (arrow.localdomain) 2> info credentialstore

Reading from ./dircrd/:

Default domain: OracleGoldenGate

  Alias: ggs_admin
  Userid: ggs_admin

Other domains:

  admin

To view other domains, use INFO CREDENTIALSTORE DOMAIN <domain>

GGSCI (arrow.localdomain) 3> exit
oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$
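
The practical upshot: an alias stored in a named domain must be referenced with its DOMAIN, while one in the default OracleGoldenGate domain can be referenced by alias alone. A sketch of both logins from GGSCI, assuming the credential stores created above:

DBLOGIN USERIDALIAS ggs_admin DOMAIN admin
DBLOGIN USERIDALIAS ggs_admin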

 


TRUNCATEing a Table makes an UNUSABLE Index VALID again

Hemant K Chitale - Sun, 2016-05-22 10:54
Here's something I learned from Jonathan Lewis some time ago.

If you set an Index to be UNUSABLE but later issue a TRUNCATE TABLE, the Index becomes VALID again --- i.e. the Index gets updated with rows subsequently inserted.

SQL> connect hemant/hemant
Connected.
SQL> drop table target_data purge;

Table dropped.

SQL> create table target_data as select * from source_data where 1=2;

Table created.

SQL> create index target_data_ndx_1
2 on target_data(owner, object_type, object_name);

Index created.

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> col segment_name format a30
SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 49
TARGET_DATA_NDX_1 INDEX 19

SQL>
SQL> col index_name format a30
SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 VALID

SQL>


So, I have a VALID Index on my Table.

I now make it UNUSABLE and add rows to it.

SQL> alter index target_Data_ndx_1 unusable;

Index altered.

SQL> select status
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1';

STATUS
--------
UNUSABLE

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 UNUSABLE

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 104

SQL>


Oracle actually drops the Index segment (so you don't see it in USER_SEGMENTS) when it is set to UNUSABLE, although the Index definition is still present.  The Index doesn't "grow" because the Segment doesn't exist.

Let me TRUNCATE the table.

SQL> truncate table target_data;

Table truncated.

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE .0625
TARGET_DATA_NDX_1 INDEX .0625

SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 VALID

SQL>


Immediately after the TRUNCATE TABLE, the Index Segment is instantiated and the Index becomes VALID again.  So inserting rows will update the Index.  My last explicit command against the Index was ALTER INDEX ... UNUSABLE, but that, it seems, is not the current state any more !

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 49
TARGET_DATA_NDX_1 INDEX 19

SQL>


So, repopulating the Table has expanded the Index again.
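
So if the point of making the Index UNUSABLE was to bulk-load without index maintenance, the ALTER INDEX ... UNUSABLE has to be re-issued after every TRUNCATE. A sketch of the safe sequence (relying on skip_unusable_indexes being TRUE, its default, so the direct-path insert skips the non-unique index):

SQL> truncate table target_data;
SQL> alter index target_data_ndx_1 unusable;
SQL> insert /*+ APPEND */ into target_data
  2  select * from source_data;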
.
.
.


Categories: DBA Blogs

Video : Indexing JSON Data in Oracle Database 12c

Tim Hall - Sun, 2016-05-22 04:25

Following on from last week’s post, today’s video is about indexing JSON data in Oracle Database 12c.

If videos aren’t your thing, you might want to read these articles, which the videos are based on.

The cameo in this video comes courtesy of Bertrand Drouvot, who was a silent extra in the previous video too. 

duplicate to a future date

Laurent Schneider - Sat, 2016-05-21 08:48

If you work with large databases, you often wait way too long for the clones. Typically you want to duplicate a 10TB database as of production timestamp 9am: you start at 9am and then you wait for hours.

Is it possible to start the clone, let’s say, at midnight, and set until time 9am?

No! You’ll get

RMAN-06617: UNTIL TIME (2016-05-21 09:00:00) is ahead of last NEXT TIME in archived logs (2016-05-20 23:58:52)

But… you could start to restore the datafiles at midnight.

sqlplus sys/***@db02 as sysdba <<EOF
  alter system set db_name='DB01' scope=spfile;
  alter system set db_unique_name='DB02' scope=spfile;
  startup force nomount
EOF

rman target sys/***@db01 auxiliary sys/***@db02 <<EOF
   restore clone primary controlfile;
   alter clone database mount;

run {
   set newname for datafile  1 to
 "/db02/system01.dbf";
   set newname for datafile  2 to
 "/db02/sysaux01.dbf";
   set newname for datafile  3 to
 "/db02/undotbs1_02.dbf";
   set newname for datafile  4 to
 "/db02/users01.dbf";
   restore clone database
   ;
}
EOF

This is exactly what RMAN does when you issue a duplicate. You could use the supported RESTORE command instead of the unsupported RESTORE CLONE command, but then it gets a bit more complex as you need to find out the location of your backup and so on.
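
The 9am duplicate itself could then look something like this sketch (backup-based duplication, with the names and until time taken from this example):

rman target sys/***@db01 auxiliary sys/***@db02 <<EOF
run {
  set until time "to_date('2016-05-21 09:00:00','yyyy-mm-dd hh24:mi:ss')";
  duplicate target database to DB01;
}
EOF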

At 9am, you issue your duplicate, and you’ll see

skipping datafile 1; already restored to file /db02/system01.dbf
skipping datafile 2; already restored to file /db02/sysaux01.dbf
skipping datafile 3; already restored to file /db02/undotbs1_02.dbf
skipping datafile 4; already restored to file /db02/users01.dbf

You just saved nine hours 🙂

“What do you mean there’s line breaks in the address?” said SQLLDR

RDBMS Insight - Fri, 2016-05-20 19:55

I had a large-ish CSV to load and a problem: line breaks inside some of the delimited fields.

Like these two records:

one, two, "three beans", four
five, six, "seven
beans", "eight wonderful beans"

SQL*Loader simply won’t handle this, as plenty of sad forum posts attest. The file needs pre-processing, and here is a little Python script to do it, adapted from Jmoreland91’s solution on Stack Overflow.

import csv
import sys

def hrtstrip(inputfile, outputfile, newtext):
    """Rewrite a CSV, replacing newlines embedded in quoted fields with newtext."""
    print("Input file " + inputfile)
    print("Output file " + outputfile)
    # newline='' lets the csv module manage line endings itself, including
    # the line breaks embedded inside quoted fields
    with open(inputfile, "r", newline='') as infile:
        with open(outputfile, "w", newline='') as outfile:
            w = csv.writer(outfile, delimiter=',', quotechar='"',
                           quoting=csv.QUOTE_NONNUMERIC, lineterminator='\n')
            for record in csv.reader(infile):
                w.writerow(tuple(s.replace("\n", newtext) for s in record))
    print("All done")

if __name__ == "__main__":
    # usage: python strip_newlines.py input.csv output.csv ' '
    hrtstrip(sys.argv[1], sys.argv[2], sys.argv[3])

Thanks to Jmoreland91 for this. If you use it, give him an updoot.
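
Once the newlines are stripped, a plain control file loads the cleaned CSV without complaint; a sketch, where the table and column names are hypothetical:

LOAD DATA
INFILE 'clean.csv'
APPEND INTO TABLE beans
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(col1, col2, col3, col4)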

edit – Jason Bucata (@tech31842) tweeted me another StackOverflow with a number of scripts in assorted languages: http://stackoverflow.com/questions/33994244/how-to-remove-newlines-inside-csv-cells-using-regex-terminal-tools

Categories: DBA Blogs

Application Development Platform (Java Cloud, Application Container Cloud, Developer Cloud) New Release 16.2.1

Oracle Cloud has already moved to the latest 16.2.1 release of Application Development platform with Java Cloud Service,  Application Container Cloud Service, and ...

We share our skills to maximize your revenue!
Categories: DBA Blogs
