Feed aggregator

Goldengate 12c runInstaller UNIX_GROUP_NAME | INVENTORY_LOCATION

Michael Dinh - Tue, 2017-02-14 08:20

I keep finding ways to break things because the environments are not consistent.

UNIX_GROUP_NAME
This parameter is the UNIX group name to be set for the inventory on UNIX platforms.
Note: The UNIX group name is used for first-time installations only.

Details for oraInventory

$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall

Notice below that the group on oraInventory is dba while /etc/oraInst.loc has oinstall.
Probably because I specified UNIX_GROUP_NAME=dba?
I don't remember, since this environment is really old.

$ ls -ld /u01/app/oracle/oraInventory
drwxrwx---. 6 oracle dba 4096 Feb 14 04:50 /u01/app/oracle/oraInventory

What happens when INVENTORY_LOCATION is not the same as /etc/oraInst.loc?

oracle@arrow1:HAWKA:/media/sf_OracleSoftware/GoldenGate_12.2.0.1/fbo_ggs_Linux_x64_shiphome/Disk1
$ ./runInstaller -silent -showProgress -waitforcompletion INSTALL_OPTION=ORA11g SOFTWARE_LOCATION=/u01/app/12.2.0.1/ggs03 UNIX_GROUP_NAME=oinstall INVENTORY_LOCATION=/u01/app/oraInventory

++++++++++

oracle@arrow1:HAWKA:/home/oracle
$ ls -ld /u01/app/oraInventory
drwxrwx---. 5 oracle oinstall 4096 Feb 14 05:18 /u01/app/oraInventory
oracle@arrow1:HAWKA:/home/oracle
$

++++++++++

oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs03
$ cd /u01/app/12.2.0.1/ggs03/OPatch/

oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs03/OPatch
$ env|grep HOME
GG_HOME=/u01/app/12.2.0.1/ggs01
HOME=/home/oracle

oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs03/OPatch
$ export GG_HOME=/u01/app/12.2.0.1/ggs03

oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs03/OPatch
$ ./opatch lsinventory -details
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/12.2.0.1/ggs03
Central Inventory : /u01/app/oracle/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.7
OUI version : 11.2.0.3.0
Log file location : /u01/app/12.2.0.1/ggs03/cfgtoollogs/opatch/opatch2017-02-14_05-21-32AM.log

List of Homes on this system:

Home name= OraGI12Home1, Location= "/u01/app/12.1.0.2/grid"
Home name= OraDB12Home1, Location= "/u01/app/oracle/product/12.1.0.2/db_1"
Home name= OraDb11g_home1, Location= "/u01/app/oracle/product/11.2.0.4/db_1"
Home name= OraHome1, Location= "/u01/app/12.2.0.1/ggs01"
Home name= OraHome2, Location= "/u01/app/12.2.0.1/ggs02"
Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
Oracle Home dir. path does not exist in Central Inventory
Oracle Home is a symbolic link
Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73
oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs03/OPatch
$

When INVENTORY_LOCATION is different from /etc/oraInst.loc, use -invPtrLoc.

oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs03/OPatch
$ ./opatch lsinventory -details -invPtrLoc /u01/app/oraInventory/oraInst.loc
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/12.2.0.1/ggs03
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oraInventory/oraInst.loc
OPatch version    : 11.2.0.1.7
OUI version       : 11.2.0.3.0
Log file location : /u01/app/12.2.0.1/ggs03/cfgtoollogs/opatch/opatch2017-02-14_05-44-55AM.log

Lsinventory Output file location : /u01/app/12.2.0.1/ggs03/cfgtoollogs/opatch/lsinv/lsinventory2017-02-14_05-44-55AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle GoldenGate Core                                               12.2.0.0.0
There are 1 products installed in this Oracle Home.


Installed Products (6):

Installer SDK Component                                              11.2.0.3.0
Java Development Kit                                                 1.6.0.65.0
Oracle GoldenGate Core                                               12.2.0.0.0
Oracle GoldenGate for Oracle 11g                                     12.2.0.0.0
Oracle One-Off Patch Installer                                       11.2.0.1.7
Oracle Universal Installer                                           11.2.0.3.0
There are 6 products installed in this Oracle Home.


There are no Interim patches installed in this Oracle Home.


--------------------------------------------------------------------------------

OPatch succeeded.
oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs03/OPatch
$

Just out of curiosity, what's the difference between INSTALL_OPTION=ORA11g and INSTALL_OPTION=ORA12c?

oracle@arrow1:HAWKA:/media/sf_OracleSoftware/GoldenGate_12.2.0.1/fbo_ggs_Linux_x64_shiphome/Disk1
$ ./runInstaller -silent -showProgress -waitforcompletion INSTALL_OPTION=ORA12c SOFTWARE_LOCATION=/u01/app/12.2.0.1/ggs03 UNIX_GROUP_NAME=dba INVENTORY_LOCATION=/u01/app/oraInventory2
Starting Oracle Universal Installer...

++++++++++

oracle@arrow1:HAWKA:/home/oracle
$ cd /u01/app/12.2.0.1/ggs03/OPatch/

oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs03/OPatch
$ cat /u01/app/oraInventory2/oraInst.loc
inventory_loc=/u01/app/oraInventory2
inst_group=dba

oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs03/OPatch
$ ls -ld /u01/app/oraInventory2/
drwxrwx---. 5 oracle dba 4096 Feb 14 06:08 /u01/app/oraInventory2/

++++++++++

oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs03/OPatch
$ ./opatch lsinventory -details -invPtrLoc /u01/app/oraInventory2/oraInst.loc
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/12.2.0.1/ggs03
Central Inventory : /u01/app/oraInventory2
   from           : /u01/app/oraInventory2/oraInst.loc
OPatch version    : 11.2.0.1.7
OUI version       : 11.2.0.3.0
Log file location : /u01/app/12.2.0.1/ggs03/cfgtoollogs/opatch/opatch2017-02-14_06-09-42AM.log

Lsinventory Output file location : /u01/app/12.2.0.1/ggs03/cfgtoollogs/opatch/lsinv/lsinventory2017-02-14_06-09-42AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle GoldenGate Core                                               12.2.0.0.0
There are 1 products installed in this Oracle Home.


Installed Products (6):

Installer SDK Component                                              11.2.0.3.0
Java Development Kit                                                 1.6.0.65.0
Oracle GoldenGate Core                                               12.2.0.0.0
Oracle GoldenGate for Oracle 12c                                     12.2.0.0.0
Oracle One-Off Patch Installer                                       11.2.0.1.7
Oracle Universal Installer                                           11.2.0.3.0
There are 6 products installed in this Oracle Home.


There are no Interim patches installed in this Oracle Home.


--------------------------------------------------------------------------------

OPatch succeeded.
oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs03/OPatch
$

Updated: Database Partitioning with EBS Whitepaper

Steven Chan - Tue, 2017-02-14 02:05

Partitioning allows a single database table and its associated indexes to be broken into smaller components, based on the chosen table and index partitioning methods. This can sometimes improve the performance of large EBS databases significantly. Several E-Business Suite modules take advantage of database partitioning right out of the box, and custom partitioning is also possible.
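
As a quick illustration of the concept (the table, columns, and partitioning scheme below are made up for this example, not taken from the white paper), a custom interval-partitioned table with a locally partitioned index might look like this:

-- Hypothetical example: monthly interval (range) partitioning on a date column,
-- with a local index partitioned along the same boundaries.
CREATE TABLE xx_gl_balances_hist (
  ledger_id      NUMBER NOT NULL,
  period_date    DATE   NOT NULL,
  code_combo_id  NUMBER NOT NULL,
  balance_amount NUMBER
)
PARTITION BY RANGE (period_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION p_hist VALUES LESS THAN (TO_DATE('01-JAN-2017', 'DD-MON-YYYY'))
);

CREATE INDEX xx_gl_balances_hist_n1
  ON xx_gl_balances_hist (ledger_id, period_date)
  LOCAL;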

Our Applications Performance Group has just released a major update to their best-practices white paper on EBS database partitioning:

Partitioning diagram

The new white paper covers:

  • Partitioning fundamentals and references to key partitioning resources
  • Relationship between partitioning and compression
  • Principles for effective partitioning of EBS tables
  • Definitions of seeded vs. custom partitioning
  • Benefits of partitioning
  • Considerations when evaluating partition designs
  • Identifying objects that might be good candidates for partitioning
  • Implementing and testing partitioning
  • And most useful: lots of concrete examples with benchmarks!

Over 90% of this whitepaper has been rewritten, so you should check out this new edition even if you've downloaded an earlier version.


Categories: APPS Blogs

Data Pump or Data Pain Part02 – tablespace/user

Michael Dinh - Mon, 2017-02-13 21:38

This is still all a work in progress; if you stay with me, I will provide the full sequence of steps for export and import.

Otherwise it would be too much info for one post.

EXP-10 Usernames Which Cannot Be Exported (Doc ID 217135.1)
Did you know that $ORACLE_HOME/rdbms/admin/catexp.sql defines the view EXU8USR, from which you can work out which schemas will not be included in a full export?

This looks rather similar to the ORACLE_MAINTAINED column of DBA_USERS in a 12c database.
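
On a 12c database you could get a similar picture with something along these lines (just a sketch for comparison; the database in this post is 11.2):

-- 12c and later: Oracle-maintained accounts are flagged in DBA_USERS
select username
from   dba_users
where  oracle_maintained = 'N'
order  by username;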

The schemas that show up in DBA_USERS but not in EXU8USR will not be exported. Trust, but verify.

oracle@arrow1:HAWKA:/media/sf_working/datapump
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Feb 13 18:28:51 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning and Real Application Testing options

ARROW1:(SYS@HAWKA):PRIMARY> set lines 120 tab off trimsp on pages 1000
ARROW1:(SYS@HAWKA):PRIMARY> col name for a55
ARROW1:(SYS@HAWKA):PRIMARY> select username from dba_users order by 1;

USERNAME
------------------------------
APPQOSSYS
DBSNMP
DEMO
DIP
GGS_ADMIN
ORACLE_OCM
OUTLN
SYS
SYSTEM

9 rows selected.

ARROW1:(SYS@HAWKA):PRIMARY> select name from exu8usr order by 1;

NAME
-------------------------------------------------------
DEMO
GGS_ADMIN
OUTLN
SYS
SYSTEM

ARROW1:(SYS@HAWKA):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning and Real Application Testing options
oracle@arrow1:HAWKA:/media/sf_working/datapump
$
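
For reference, the full.dmp used below would have come from a full export. That step isn't shown here, but a minimal export parfile would look something like this (the file and log names are my own placeholders):

$ cat expdp_full01.par

directory=DATA_PUMP_DIR
userid="/ as sysdba"
metrics=Y
full=Y
dumpfile=full.dmp
logfile=expdp_full.log

$ expdp parfile=expdp_full01.par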

First step is to pre-create tablespaces.

$ cat impdp_full01_tbs.par

directory=DATA_PUMP_DIR
userid="/ as sysdba"
metrics=Y
dumpfile=full.dmp
logfile=impdp_full_tbs.log
include=TABLESPACE
sqlfile=tbs.sql

$ impdp parfile=impdp_full01_tbs.par

Import: Release 11.2.0.4.0 - Production on Mon Feb 13 18:46:56 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning and Real Application Testing options
Startup took 0 seconds
Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_SQL_FILE_FULL_01":  /******** AS SYSDBA parfile=impdp_full01_tbs.par
Processing object type DATABASE_EXPORT/TABLESPACE
     Completed 4 TABLESPACE objects in 0 seconds
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Mon Feb 13 18:46:57 2017 elapsed 0 00:00:01

$ cat /u01/app/oracle/product/11.2.0.4/db_1/rdbms/log/tbs.sql

See how the filename is hard-coded even when OMF is being used.

-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS" DATAFILE
  SIZE 268435456
  AUTOEXTEND ON NEXT 268435456 MAXSIZE 8192M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 1610612736
  AUTOEXTEND ON NEXT 268435456 MAXSIZE 8192M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 135266304
  AUTOEXTEND ON NEXT 134217728 MAXSIZE 8193M,
  SIZE 16777216,
  SIZE 10485760
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

   ALTER DATABASE DATAFILE
  '/oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf' RESIZE 16785408;

CREATE TABLESPACE "GGS_DATA" DATAFILE
  SIZE 269484032
  AUTOEXTEND ON NEXT 268435456 MAXSIZE 16385M
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576 DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

Basically, datafile 5 was created at 16777216 bytes and later resized to 16785408.
Why did Data Pump not use the current size instead of the original size?
I know what you are probably thinking: why not just write the SQL script to do the work yourself?
True, but that's like buying a Mercedes-Benz and having to roll down the windows by hand. (Dating myself.)

ARROW1:(SYS@HAWKA):PRIMARY> select file#,name,bytes from v$datafile where name like '%user%';

     FILE# NAME                                                         BYTES
---------- ------------------------------------------------------- ----------
         4 /oradata/HAWKA/datafile/o1_mf_users_d3q5b4gw_.dbf        135266304
         5 /oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf         16785408
         7 /oradata/HAWKA/datafile/o1_mf_users_db4vw289_.dbf         10485760

ARROW1:(SYS@HAWKA):PRIMARY>

SYSTEM and SYSAUX tablespaces are not exported, while UNDO and TEMP are.
Hopefully, the following was run to capture all the details from database creation:
alter database backup controlfile to trace as '/tmp/cf_@.sql' reuse resetlogs;
select property_name,property_value from DATABASE_PROPERTIES;

Processing object type DATABASE_EXPORT/TABLESPACE
     Completed 4 TABLESPACE objects in 1 seconds

+++++++++
     
ARROW1:(SYS@HAWKA):PRIMARY> select name from v$tablespace order by 1;

NAME
-------------------------------------------------------
GGS_DATA
SYSAUX
SYSTEM
TEMP
UNDOTBS
USERS

6 rows selected.

ARROW1:(SYS@HAWKA):PRIMARY>     

Last but not least, did you know you can create a database in ARCHIVELOG mode to begin with, versus having to enable ARCHIVELOG mode after the fact?

Take a look at my post below.
OTN Appreciation Day : Create Database Using SQL | Thinking Out Loud Blog

ARCHIVELOG
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
SET TIME_ZONE='US/Mountain'
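
For illustration only (this is not the script from that post), a CREATE DATABASE statement using those clauses might look like the sketch below, assuming OMF is configured via db_create_file_dest so no file names are needed:

-- Minimal sketch: create the database in ARCHIVELOG mode from the start (OMF assumed)
CREATE DATABASE hawka
  USER SYS IDENTIFIED BY change_me
  USER SYSTEM IDENTIFIED BY change_me
  ARCHIVELOG
  CHARACTER SET AL32UTF8
  NATIONAL CHARACTER SET AL16UTF16
  SET TIME_ZONE = 'US/Mountain'
  EXTENT MANAGEMENT LOCAL
  DATAFILE SIZE 700M AUTOEXTEND ON
  SYSAUX DATAFILE SIZE 550M AUTOEXTEND ON
  DEFAULT TABLESPACE users DATAFILE SIZE 100M AUTOEXTEND ON
  DEFAULT TEMPORARY TABLESPACE temp TEMPFILE SIZE 500M
  UNDO TABLESPACE undotbs DATAFILE SIZE 256M AUTOEXTEND ON
  LOGFILE GROUP 1 SIZE 100M, GROUP 2 SIZE 100M, GROUP 3 SIZE 100M;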


How to print row value involved in merge statement

Tom Kyte - Mon, 2017-02-13 16:26
Hi, By using dbms_output.put_line(sql%rowcount) we can get the number of rows affected by a merge statement. Is it possible to print the values of those rows during the merge using PL/SQL? Thanks, Amiya.
Categories: DBA Blogs

Inconsistent size of tablespace.

Tom Kyte - Mon, 2017-02-13 16:26
Hello, I have two tables: 1st - the empty table t1 (partitioned by date) on the server s1, 2nd - the source table t2 (partitioned by date) on the server s2. Both tables t1 and t2 have the same structure. I want to copy t1 from t2 partition ...
Categories: DBA Blogs

Deduplication of rows with null values

Tom Kyte - Mon, 2017-02-13 16:26
drop table test purge; create table test (ssn int, address varchar2(10), tax_id varchar2(10), identify int); insert into test values(100, '','', null); insert into test values(111, '','', null); insert into test values(111, 'xyz','', nu...
Categories: DBA Blogs

How to get the count of consecutive transactions

Tom Kyte - Mon, 2017-02-13 16:26
Team, Good Monday Morning!!!!!!!! First let me share the sample table and insert scripts for your reference. Table creation script : CREATE TABLE st_lo_trans(customer_name VARCHAR2(10),loan_code NUMBER,loan_tran_code NUMBER,tran_id VARCHA...
Categories: DBA Blogs

Most of the rows are going to one single partition

Tom Kyte - Mon, 2017-02-13 16:26
Hi, We have a hash partitioned table with 32 partitions and the table has 25 million records. Most of the inserted records are going to one single partition; in this case 17 million are going to one partition. Is there a way this can be controlled so that rows are...
Categories: DBA Blogs

partition's drop

Tom Kyte - Mon, 2017-02-13 16:26
i am creating on partition table with help of the interval (numtoymintervel) then create the 4 partitions. Now i am drop the table without drop the partition's. then i will check in user_tab_partitions. that 4 partitions are creating with systable...
Categories: DBA Blogs

Why we use alter database open resetlogs after flashback database to guaranteed restore point

Tom Kyte - Mon, 2017-02-13 16:26
Hi Team, Below is how I created a guaranteed restore point. ========================================================================================= SQL> select name,database_role,open_mode,flashback_on,log_mode from v$database; NAME D...
Categories: DBA Blogs

How to avoid materialized views having staleness="IMPORT" after importing schema by expdp/impdp?

Tom Kyte - Mon, 2017-02-13 16:26
After importing a schema by expdp/impdp, I have materialized views with staleness set to "IMPORT". That is, a complete refresh is needed before fast refresh can be used later on. However, those materialized views (as in a star schema commonly seen in a datamart) take...
Categories: DBA Blogs

How to extract substrings enclosed within double quotes from a string

Tom Kyte - Mon, 2017-02-13 16:26
I have a requirement to extract column names (there could be multiple instances) from a SQL string. The column names will always be enclosed within double quotes. e.g. "le code" = 802 AND "principal amount" > 1200 OR "branch id" = 'ABC' Req...
Categories: DBA Blogs

ORA-01858

Tom Kyte - Mon, 2017-02-13 16:26
I'm using an Apple MacBook Pro and I downloaded VirtualBox in order to work with Oracle 11g. I installed Windows 7 in VirtualBox. I wrote my queries in SQL*Plus. I have a problem with single quotes. Even though the query works in the classroom, it always g...
Categories: DBA Blogs

Ford Motor Company Chooses Oracle HCM Cloud Suite

Oracle Press Releases - Mon, 2017-02-13 16:20
Press Release
Ford Motor Company Chooses Oracle HCM Cloud Suite Ford selects Oracle HCM Cloud to manage team experience

Redwood Shores, Calif.—Feb 13, 2017

Oracle announced today that Ford Motor Company has selected Oracle's HCM Cloud Suite to move toward a more streamlined, digital view of HR products and services and to help improve the employee and manager experience. Cloud services are a key enabler for HR transformations.

Traditionally, businesses have used multiple platforms to manage their HR needs. Today, the cloud has changed this as companies seek a new suite of services that can be integrated with existing technology. The Oracle HCM Cloud Suite allows users to support large-scale Global HR, reporting, and security requirements and provides Global HR, Payroll (U.S.), Workforce Compensation, Goal and Performance Management, Talent Review and Succession, and Recruiting and Onboarding. For HR, it is equally important to ensure both an international and security-conscious portfolio of services that can capitalize on universal reporting and data analytics to anticipate and plan for driving business and recruiting needs.

Oracle's SaaS and PaaS systems are robust, yet flexible enough to address these needs. The cloud offers a single platform to access employee records and information while also helping HR professionals cut costs and save time so they can focus on more strategic decision-making and less on paperwork. The Oracle Cloud Customer Connect Program is a unique offering for customers to connect with peers to address the unique challenges of their business and learn from others' best practices and strategies for moving to the cloud.

“Oracle is extremely excited to be working with Ford and others in the auto industry,” said Gretchen Alarcon, group vice president of HCM Product Strategy for Oracle. “We work hard to listen and deliver modern, unified cloud solutions that get at the heart of what will ultimately offer an all-encompassing service to save a business money and offer a strategic approach to business practices. We want to make sure this technology ultimately adds to any company’s move toward the future. But even more so, we want to let our customers know that we offer our network and collaboration as a means to help during this modern transformation.”

Contact Info
Jennifer Yamamoto
Oracle
+1.916.761.9555
jennifer.yamamoto@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Jennifer Yamamoto

  • +1.916.761.9555

Oracle Internet of Things Cloud Applications Enable Organizations to Optimize the Digital Supply Chain with Improved Visibility and Predictive Insights

Oracle Press Releases - Mon, 2017-02-13 14:15
Press Release
Oracle Internet of Things Cloud Applications Enable Organizations to Optimize the Digital Supply Chain with Improved Visibility and Predictive Insights IoT predictive analytics automate business processes and operations across the supply chain to enhance the customer experience

Oracle Modern Supply Chain Experience, San Jose, Calif.—Feb 13, 2017

Oracle announced that it is expanding its Internet of Things (IoT) portfolio with four new cloud solutions to help businesses fully utilize the benefits of digital supply chains. These solutions enable businesses to detect, analyze, and respond to IoT signals and incorporate these insights into existing and rapidly evolving market capabilities.

Signals from IoT-enabled devices are growing exponentially and represent an opportunity for organizations that are able to manage, interpret, and leverage these vast sources of data. Organizations with the tools to integrate device data into business processes and applications can gain critical predictive insights and drive cost-effective actions. IoT solutions enable businesses to deliver innovative new services faster and with less risk to their customers.

“The world is becoming ever-more digital and IoT is the next step on that journey—delivering better customer experiences and helping organizations achieve a competitive advantage," said Bhagat Nainani, group vice president of IoT applications development at Oracle. “Oracle IoT Applications enable businesses to leverage connected devices and equipment to modernize the digital supply chain.”

Oracle IoT Applications support three important functions: device registration and data collection; signal analysis and interpretation; and connection to specific business processes. By applying advanced, predictive analytics to the device signals, Oracle IoT Applications calculate complex business-specific KPIs and trigger automated actions in real time. The solutions can deliver capabilities, such as predictive maintenance, diagnostic dashboards, and increased real-time visibility, and can be applied to manufacturing, quality, and product data, as well as logistics and employee movements.

The new cloud applications integrate with Oracle Supply Chain Management (SCM) Cloud Applications to make IoT signals actionable and include the following products:

  • IoT Asset Monitoring Cloud: Monitors assets, utilization, availability, and data from connected sensors and creates incidents in the backend SCM, ERP, or Service Clouds to automate the workflows
  • IoT Connected Worker Cloud: Tracks employees to support safety, service, and regulatory compliance initiatives
  • IoT Fleet Monitoring Cloud: Monitors the position and progress of passenger, service, and delivery vehicles, as well as driver behavior
  • IoT Production Monitoring Cloud: Monitors production equipment to assess and predict manufacturing issues

These IoT Cloud applications are built on the Oracle IoT Cloud and Oracle Big Data Cloud. They integrate with Oracle SCM Cloud and Oracle Service Cloud, as well as on-premises software, to enable operational visibility and facilitate better customer service and responsiveness.

“Noble Plastics is an innovative custom injection molding manufacturer that leverages robotics for automation. We see tremendous potential in IoT to drive predictive maintenance of industrial robots,” said Scott Rogers, Technical Director, Noble Plastics. “Oracle IoT Asset Monitoring Cloud will help us in monitoring FANUC robots that are deployed on our plant floors and alert technicians proactively on their mobile device. This will enhance product quality, production efficiency and throughput while controlling costs.”

For additional information, visit Oracle Cloud and connect with Oracle SCM Cloud and Oracle IoT Cloud on Twitter.

 
Contact Info
Joann Wardrip
Oracle
+1.650.607.1343
joann.wardrip@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Joann Wardrip

  • +1.650.607.1343

Where do you specify the Date Format Mask

Dimitri Gielis - Mon, 2017-02-13 10:32

When reviewing Oracle APEX applications I often see hardcoded date or timestamp formats.
You can define your date formats in multiple places in your application: in your item or column attributes, as part of your code, e.g. TO_CHAR(sysdate, 'DD-MON-YYYY HH24:MI'), or, if you want to make it more reusable, you might create a substitution string. That will all work, but you can make life easier for yourself and for anyone who needs to maintain your code.

APEX itself provides, in the Globalization attributes (Shared Components), a place where you can define the default date and timestamp format masks for your entire application. I consider using those fields a best practice: everything is defined in one logical place, so when you need to change your format, you do it once and you're done. In your custom SQL and PL/SQL code you can also reference those format masks through predefined substitution strings:

  • APP_NLS_DATE_FORMAT
  • APP_DATE_TIME_FORMAT
  • APP_NLS_TIMESTAMP_FORMAT
  • APP_NLS_TIMESTAMP_TZ_FORMAT

e.g. TO_CHAR(sysdate, :APP_NLS_DATE_FORMAT)
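
For instance (the table and column names here are made up), the same substitution string can be used in a report query:

-- Hypothetical report query: format the date with the application-level mask
select order_id,
       to_char(order_date, :APP_NLS_DATE_FORMAT) as order_date_display
from   my_orders
order  by order_date desc;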

Here’s a screenshot which shows which substitution string corresponds with which field:

Application Attributes - Globalization

You can define the format mask you want, or you can click the arrow to see most used format masks represented with an example. To make it a bit easier, I put the format mask (in red) next to it, so you see the underlying format mask more easily:

Possible date format masks defined in the pop-up

If you need to make the format mask dynamic, for example using different format masks for different languages, APEX doesn't allow you to translate that substitution string through Text Messages, but you can work around it by using your own substitution string and having it filled dynamically. In the Globalization attributes you would then add, instead of a hardcoded format mask, your own substitution string e.g. &MY_TRANSLATED_DATE.FORMAT.

Categories: Development

Kafka Streams and NodeJS – Consuming and periodically reporting in Node.JS on the results from a Kafka Streams streaming analytics application

Amis Blog - Mon, 2017-02-13 09:22

In several previous articles on Apache Kafka, Kafka Streams and Node.JS for interacting with Apache Kafka, I have described how to create a Node.JS application that publishes messages to a Kafka Topic (based on entries in a CSV file), how to create a simple Kafka Streams Java application that processes such messages from that Topic and how to extend that Java application to produce a running Top-N aggregation from that Topic. In this article, I want to discuss a Node application that consumes the Top-N reports from the Kafka Topic produced to by the Kafka Streams application and periodically (once every X seconds) reports on the current standings.


The sources for this article are in this GitHub Repo: https://github.com/lucasjellema/kafka-streams-running-topN.

The Node application uses the npm module kafka-node (https://www.npmjs.com/package/kafka-node) for the interaction with Kafka.

A new Client is created – based on the ZooKeeper connect string (ubuntu:2181/). Using the Client, a Consumer is constructed. The consumer is configured to consume from Topic Top3CountrySizePerContinent. A message handler is associated with the consumer, to handle messages on the topic.

The messages consumed by the Node consumer have the following structure:

{"topic":"Top3CountrySizePerContinent"
,"value":"{\"nrs\":[{\"code\":\"DZ\",\"name\":\"Algeria\",\"population\":40263711,\"size\":2381741,\"continent\":\"Africa\"},{\"code\":\"CD\",\"name\":\"Democratic Republic of the Congo\",\"population\":81331050,\"size\":2344858,\"continent\":\"Africa\"},{\"code\":\"SD\",\"name\":\"Sudan\",\"population\":36729501,\"size\":1861484,\"continent\":\"Africa\"},null]}"
,"offset":244
,"partition":0
,"key":{"type":"Buffer","data":[65,102,114,105,99,97]}
}

The key of the message is of type buffer. We happen to know the key is actually a String (the name of the continent). We can extract the key like this:

var continent = new Buffer(countryMessage.key).toString('ascii');

The payload of the message – the top3 for the continent – is in the value property. It can be extracted easily:

var top3 = JSON.parse(countryMessage.value);

{"nrs":
  [
   {"code":"BS","name":"Bahamas","population":327316,"size":13880,"continent":"North America"}
  ,{"code":"AG","name":"Antigua and Barbuda","population":93581,"size":443,"continent":"North America"}
  ,{"code":"AW","name":"Aruba","population":113648,"size":180,"continent":"North America"}
  ,null
  ]
}

The object countrySizeStandings contains a property for each continent. The property is set equal to the top3 that was most recently consumed from the Kafka Topic Top3CountrySizePerContinent.

countrySizeStandings[continent]=top3;

Using the Node built in setInterval() the report() function is scheduled for execution every reportingIntervalInSecs seconds. This function writes the current data in countrySizeStandings to the console.

 

/*

This program consumes Kafka messages from topic Top3CountrySizePerContinent to which the Running Top3 (size of countries by continent) is produced.

This program reports: top 3 largest countries per continent (periodically, with a configurable interval) 
*/


var kafka = require('kafka-node')
var Consumer = kafka.Consumer
var client = new kafka.Client("ubuntu:2181/")

var countriesTopic = "Top3CountrySizePerContinent";
var reportingIntervalInSecs = 4;

var consumer = new Consumer(
  client,
  [],
  {fromOffset: true}
);

consumer.on('message', function (message) {
  handleCountryMessage(message);
});

consumer.addTopics([
  { topic: countriesTopic, partition: 0, offset: 0}
], () => console.log("topic "+countriesTopic+" added to consumer for listening"));

var countrySizeStandings = {}; // the global container for the most recent country size standings 

function handleCountryMessage(countryMessage) {
    var top3 = JSON.parse(countryMessage.value);
    // extract key value from the Kafka message
    var continent = new Buffer(countryMessage.key).toString('ascii');
    // record the top3 for the continent indicated by the message key as current standing in the countrySizeStandings object
    countrySizeStandings[continent]=top3;
}// handleCountryMessage

// every reportingIntervalInSecs seconds, report on the current standings per continent
function report() {
   var d = new Date();
   console.log("Report at "+ d.getHours()+":"+d.getMinutes()+ ":"+d.getSeconds());
   // loop over the keys (properties) in the countrySizeStandings map (object)
   for (var continent in countrySizeStandings) {
     if (countrySizeStandings.hasOwnProperty(continent)) {
        var line = continent+ ": ";
        var index = 1;
        countrySizeStandings[continent].nrs.forEach(function(c) {
          if (c) {
            line = line + (index++) +'. '+ c.name+ '('+c.size+'), ';
          }
        });
        console.log(line);
    }//if
  }//for
}//report

// schedule execution of function report at the indicated interval
setInterval(report, reportingIntervalInSecs*1000);

 

Running the end-to-end chain requires a running Kafka cluster, plus the Node application that produces the country messages from the CSV file, the Kafka Streams Java application that derives the running Top 3 standings, and finally the Node application introduced in this article, which consumes the Top 3 standings and reports them to the console (as instructed in the ReadMe in the GitHub Repo):

  • node KafkaCountryProducer.js
  • java -cp target/Kafka-Streams-Country-TopN-1.0-SNAPSHOT.jar;target/dependency/* nl.amis.streams.countries.App
  • node KafkaCountryStreamsConsumer.js

The CountryProducer.js Node application also writes the messages it produces to Kafka to the console.

The Kafka-Streams-Country-TopN Java application also writes its streaming analytic findings to the console.

The outcome of the Kafka Streams analysis – as published to the Kafka Topic – is consumed by the Node application continuously, and reported to the console periodically (at the configured reporting interval), updated with the latest findings.

The post Kafka Streams and NodeJS – Consuming and periodically reporting in Node.JS on the results from a Kafka Streams streaming analytics application appeared first on AMIS Oracle and Java Blog.

Band Join 12c

Jonathan Lewis - Mon, 2017-02-13 07:53

One of the optimizer enhancements that appeared in 12.2 for SQL is the "band join", which makes certain types of merge join much more efficient. Consider the following query (I'll supply the SQL to create the demonstration at the end of the posting), which joins two tables of 10,000 rows each using a "between" predicate on a column which (just to make it easy to understand the size of the result set) happens to be unique with sequential values, though there's no index or constraint in place:

select
        t1.v1, t2.v1
from
        t1, t2
where
        t2.id between t1.id - 1
                  and t1.id + 2
;

This query returns nearly 40,000 rows. Except for the values at the extreme ends of the range, each of the 10,000 rows in t2 will join to 4 rows in t1 thanks to the simple sequential nature of the data. In 12.2 the query, with rowsource execution stats enabled, completed in 1.48 seconds. In 12.1.0.2 the query, with rowsource execution stats OFF, took a little over 14 seconds. (With rowsource execution stats enabled it took 12.1.0.2 a little over 1 minute to return the first 5% of the data – I didn't bother to wait for the rest, though the rate would have improved over time.)

Here are the two execution plans – spot the critical difference:


12.1.0.2
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |    25M|   715M|  1058  (96)| 00:00:01 |
|   1 |  MERGE JOIN          |      |    25M|   715M|  1058  (96)| 00:00:01 |
|   2 |   SORT JOIN          |      | 10000 |   146K|    29  (11)| 00:00:01 |
|   3 |    TABLE ACCESS FULL | T1   | 10000 |   146K|    27   (4)| 00:00:01 |
|*  4 |   FILTER             |      |       |       |            |          |
|*  5 |    SORT JOIN         |      | 10000 |   146K|    29  (11)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| T2   | 10000 |   146K|    27   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("T2"."ID"<="T1"."ID"+2)
   5 - access("T2"."ID">="T1"."ID"-1)
       filter("T2"."ID">="T1"."ID"-1)

12.2.0.1
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 40000 |  1171K|    54  (12)| 00:00:01 |
|   1 |  MERGE JOIN         |      | 40000 |  1171K|    54  (12)| 00:00:01 |
|   2 |   SORT JOIN         |      | 10000 |   146K|    27  (12)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   146K|    25   (4)| 00:00:01 |
|*  4 |   SORT JOIN         |      | 10000 |   146K|    27  (12)| 00:00:01 |
|   5 |    TABLE ACCESS FULL| T2   | 10000 |   146K|    25   (4)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."ID">="T1"."ID"-1)
       filter("T2"."ID"<="T1"."ID"+2 AND "T2"."ID">="T1"."ID"-1)

Notice how operation 4, the FILTER, that appeared in 12.1 has disappeared in 12.2 and the filter predicate that it used to hold is now part of the filter predicate of the SORT JOIN that has been promoted to operation 4 in the new plan.

As a reminder – the MERGE JOIN operates as follows: for each row returned by the SORT JOIN at operation 2 it calls operation 4. In 12.1 this example will then call operation 5 so the SORT JOIN there happens 10,000 times. It’s important to know, though, that the name of the operation is misleading; what’s really happening is that Oracle is “probing a sorted result set in local memory” 10,000 times – it’s only on the first probe that it finds it has to call operation 6 to read and move the data into local memory in sorted order.

So in 12.1 operation 5 probes (accesses) the in-memory data set starting at the point where t2.id >= t1.id – 1; I believe there's an optimisation here because Oracle will recall where it started the probe last time and resume searching from that point; having found the first point in the in-memory set where the access predicate is true, Oracle will walk through the list passing each row back to the FILTER operation as long as the access predicate is still true, and it will be true right up until the end of the list. As each row arrives at the FILTER operation Oracle checks to see if the filter predicate there is true and passes the row up to the MERGE JOIN operation if it is. We know that on each cycle the FILTER operation will start returning false after receiving 4 rows from the SORT JOIN operation – Oracle doesn't. On average the SORT JOIN operation will send 5,000 rows to the FILTER operation (for a total of 50,000,000 values passed and discarded).

In 12.2, and for the special case here where the join predicate uses constants to define the range, Oracle has re-engineered the code to eliminate the FILTER operation and to test both parts of the between clause in the same subroutine it uses to probe and scan the rowsource. In 12.2 the SORT JOIN operation will pass 4 rows up to the MERGE JOIN operation and stop scanning on the fifth row it reaches. In my examples that’s an enormous (CPU) saving in subroutine calls and redundant tests.

Footnote:

This “band-join” mechanism only applies when the range is defined by constants (whether literal or bind variable). It doesn’t work with predicates like (e.g.):

where t2.id between t1.id - t1.step_back and t1.id + t1.step_forward

The astonishing difference in performance due to enabling rowsource execution statistics is basically down to the number of subroutine calls eliminated – I believe (subject to a hidden parameter that controls a "sampling frequency") that Oracle will call the O/S clock twice each time it calls the SORT JOIN operation from the FILTER operation to acquire the next row. In 12.1 we're making 50M redundant calls to SORT JOIN.

The dramatic difference in performance even when rowsource execution statistics isn’t enabled is probably something you won’t see very often in a production system – after all, I engineered a fairly extreme data set and query for the purposes of demonstration. Note, however, the band join does introduce a change in cost, so it’s possible that on the upgrade you may find a few cases where the optimizer will switch from a nested loop join to a merge join using a band-join.
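
The creation SQL promised above isn't reproduced in this extract, but a minimal sketch matching the description (two 10,000-row tables with unique, sequential id values and a v1 column, no indexes or constraints) would be something like:

-- Sketch only: not the original demo script from the post
create table t1 as
select rownum id, lpad(rownum,10,'0') v1
from   dual
connect by level <= 10000;

create table t2 as
select rownum id, lpad(rownum,10,'0') v1
from   dual
connect by level <= 10000;

begin
        dbms_stats.gather_table_stats(user, 'T1');
        dbms_stats.gather_table_stats(user, 'T2');
end;
/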


Oracle Launches Cloud Service to Help Organizations Integrate Disparate Data and Drive Real-Time Analytics

Oracle Press Releases - Mon, 2017-02-13 07:00
Press Release
Oracle Launches Cloud Service to Help Organizations Integrate Disparate Data and Drive Real-Time Analytics Enhances Oracle Cloud Platform Portfolio with Oracle Data Integrator Cloud Service

Redwood Shores, Calif.—Feb 13, 2017

Oracle today expanded Oracle Cloud Platform’s data integration offerings with the launch of Oracle Data Integrator Cloud. This new cloud service significantly simplifies and accelerates cross-enterprise data integration to support real-time analytics that help organizations drive better business decisions. 

In today’s information driven economy, data is a fundamental asset to most businesses. As more and more data moves to the cloud, getting information and insight to the right people and the right applications at the right time becomes progressively more difficult. With the introduction today of the Oracle Data Integrator Cloud, organizations can improve their agility by deploying projects more quickly, reduce risk with an open, non-proprietary technology, and reduce costs with better productivity. 

“To be effective and agile, enterprises need seamless communication and flow of data between sources and targets - data originating from IoT, Web, and business applications or data that is stored in the cloud or on premises,” said Jeff Pollock, vice president of product management, Oracle. “Oracle Data Integrator Cloud provides businesses with a high-performance, simple, and integrated cloud service to execute data transformations where the data lies, with no hand coding required, and without having to copy data unnecessarily.”

Easy to use and integrate, Oracle Data Integrator Cloud helps organizations improve productivity, reduce development costs, and lower total cost of ownership by facilitating better data movement and transformation between Oracle and non-Oracle systems, data sources, and applications. It offers a flow-based declarative user interface along with release management capabilities that allow customers to improve productivity and better manage their code, as well as their development, testing and production environments. Oracle Data Integrator Cloud’s high performance architecture, with its E-LT capabilities and advanced parallelism options enable faster, more efficient loading and transformation for data marts, data warehouses, and big data systems.

Oracle Data Integrator Cloud is fully integrated with Oracle’s PaaS offerings, including Oracle Database Cloud, Oracle Database Exadata Cloud, and Oracle Big Data Cloud. Oracle also delivers pre-built integration for non-Oracle solutions, allowing users to seamlessly switch between underlying Big Data technologies such as Hive, HDFS, HBase, and Sqoop.

Oracle Cloud

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new cloud environments, existing ones, and hybrid, and all workloads, developers, and data.  The Oracle Cloud delivers nearly 1,000 SaaS applications and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world and supports 55 billion transactions each day.

For more information, please visit us at http://cloud.oracle.com.

Contact Info
Nicole Maloney
Oracle
+1.415.235.4033
nicole.maloney@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Nicole Maloney

  • +1.415.235.4033

Kristin Reeves

  • +1.415.856.5145

UNION vs UNION ALL: What’s The Difference?

Complete IT Professional - Mon, 2017-02-13 05:00
What’s the difference between UNION and UNION ALL in Oracle SQL? There are a few. Learn what they are in this article. What Is UNION and UNION ALL? First of all, let’s explain what they are. UNION and UNION ALL are both “set operators”. They are keywords you can use to combine two sets of […]
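
In short (a quick sketch, not taken from the linked article, with made-up table names): UNION removes duplicate rows from the combined result, while UNION ALL simply appends the two result sets and is therefore usually cheaper:

-- UNION: duplicates across (and within) the two queries are removed
select department_id from employees_ny
union
select department_id from employees_sf;

-- UNION ALL: every row from both queries is returned, duplicates included
select department_id from employees_ny
union all
select department_id from employees_sf;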
Categories: Development
