Feed aggregator

New Product Launch: Oracle Database Programming Interface for C (ODPI-C)

Christopher Jones - Mon, 2017-01-30 20:36

Today Oracle released a great new GitHub project - Oracle Database Programming Interface for C. It sits on top of OCI and offers an alternative programming experience.

ODPI-C is a C library that simplifies the use of common Oracle Call Interface (OCI) features for Oracle Database drivers and user applications.

ODPI-C Goal

ODPI-C's goal is to expose common OCI functionality in a readily consumable way to the C or C++ developer. OCI's API is extremely flexible and highly efficient. It gives a lot of fine-grained control to the developer and covers a very wide range of use cases. ODPI-C is also flexible but is aimed primarily at language driver creators. These creators are programming within the confines of a scripting language's type system and semantics. The languages often expose simplified data access to users through cross-platform, 'common-denominator' APIs. Therefore it makes sense for ODPI-C to provide easy-to-use functionality for common data access, while still allowing the power of Oracle Database to be used.

Of course ODPI-C isn't just restricted to driver usage. If ODPI-C has the functionality you need for accessing Oracle Database, you can add it to your own custom projects.

ODPI-C is a refactored and greatly enhanced version of the "DPI" data access layer used in our very successful node-oracledb driver.

Releasing ODPI-C as a new, standalone project means its code can be consumed and reused more easily. For database drivers it allows Oracle features to be exposed more rapidly and in a consistent way. This will allow greater cross-language driver feature compatibility, which is always useful in today's multi-language world.

ODPI-C Features

Oracle's Anthony Tuininga has been leading the ODPI-C effort, making full use of his extensive driver knowledge as creator and maintainer of the extremely popular, and full featured, Python cx_Oracle driver.

The ODPI-C feature list currently includes all the normal calls you'd expect to manage connections and to execute SQL and PL/SQL efficiently. It also has such gems as SQL and PL/SQL object support, scrollable cursors, Advanced Queuing, and Continuous Query Notification. The full list in this initial Beta release, in no particular order, is:

  • 11.2, 12.1 and 12.2 Oracle Client support

  • SQL and PL/SQL execution

  • REF cursors

  • Large objects (CLOB, NCLOB, BLOB, BFILE)

  • Timestamps (naive, with time zone, with local time zone)

  • JSON objects

  • PL/SQL arrays (index-by integer tables)

  • Objects (types in SQL, records in PL/SQL)

  • Array fetch

  • Array bind/execute

  • Array DML Row Counts

  • Standalone connections

  • Connections via Session pools (homogeneous and non-homogeneous)

  • Session Tagging in session pools

  • Database Resident Connection Pooling (DRCP)

  • Connection Validation (when acquired from session pool or DRCP)

  • Proxy authentication

  • External authentication

  • Statement caching (with tagging)

  • Scrollable cursors

  • DML RETURNING clause

  • Privileged connection support (SYSDBA, SYSOPER, SYSASM, PRELIM_AUTH)

  • Database Startup/Shutdown

  • End-to-end tracing, mid-tier authentication and auditing (action, module, client identifier, client info, database operation)

  • Batch Errors

  • Query Result Caching

  • Application Continuity (with some limitations)

  • Query Metadata

  • Password Change

  • OCI Client Version and Server Version

  • Implicit Result Sets

  • Continuous Query Notification

  • Advanced Queuing

  • Edition Based Redefinition

  • Two Phase Commit

In case you want to access other OCI calls without having to modify ODPI-C code, there is a call to get the underlying OCI service context handle.

ODPI-C applications can take full advantage of OCI features which don't require API access, such as the oraaccess.xml configuration file for enabling statement cache auto-tuning. Similarly, Oracle Database features controlled by SQL and PL/SQL, such as partitioning, can be used in applications, as you would expect.

Communication to the database is handled by Oracle Net, so features such as encrypted communication and LDAP can be configured.

ODPI-C's API makes memory and resource management simpler, particularly for 'binding' and 'defining'. A reference counting mechanism adds resiliency by stopping applications from destroying in-use OCI resources. To offer an alternative programming experience to OCI, the ODPI-C API uses a multiple getter/setter model for handling attributes.

Using ODPI-C

ODPI-C is released as source-code on GitHub. The code makes OCI calls and so requires an Oracle client, which must be installed separately. Version 11.2 or later of the client is required. This allows applications to connect to Oracle Database 9.2 or later. The free Oracle Instant Client is the standard way to obtain standalone Oracle client libraries and header files.

The project is licensed under the Apache 2.0 and/or the Oracle UPL licenses, so the code is readily available for adoption into your own projects.

ODPI-C code can be included in your C or C++ applications and compiled like any OCI application. Or, if you want to use ODPI-C as a shared library, a sample Makefile for building on Linux, OS X and Windows is provided.

Support for ODPI-C is via logging GitHub Issues only - but this does have the advantage of giving you direct access to ODPI-C developers. Also remember the underlying OCI libraries (which do all the hard work) are extremely widely used, tested and supported.

If you want to do more than view the code, you can build ODPI-C as a library using the sample Makefile, and then build the current standalone sample programs. These show a number of ODPI-C features.
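As a rough illustration, building the library and samples on Linux might look something like this (a sketch only; the exact steps and Makefile targets are described in the project's README, and an Oracle client such as Instant Client must already be installed):

git clone https://github.com/oracle/odpi.git
cd odpi
# build the ODPI-C shared library using the supplied sample Makefile
make
# then build and run the standalone sample programs as described in the README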

ODPI-C Plans

The ODPI-C release today is 2.0.0-beta.1, indicating we're happy with the general design but want to get your wider review. We also need to complete some testing and add some polish.

We aim to stabilize ODPI-C relatively quickly and then continue adding functionality, such as support for the new Oracle Database 12.2 Sharding feature.

Future Node.js node-oracledb and Python cx_Oracle drivers will use ODPI-C. There is active work on these updates.

I know Kubo Takehiro, who does a fantastic job maintaining the ruby-oci8 driver, has been keen to see what ODPI-C can do for his driver. I look forward to seeing how he uses it.

I think you'll be pleased with the direction and plans for scripting languages in 2017.

We really welcome your feedback on this big step forward.

ODPI-C References

Home page is: https://oracle.github.io/odpi/

Code is at https://github.com/oracle/odpi

Documentation is at https://oracle.github.io/odpi/doc/index.html

Issues and comments can be reported at https://github.com/oracle/odpi/issues

Partner Webcast – Docker Agility in Cloud: Introducing Oracle Container Cloud Service

In modern IT, "containers" are not simply a buzzword; they are a proven way of improving developers' productivity and minimizing deployment costs. They allow you to quickly create ready-to-run packaged...

Categories: DBA Blogs

Join Rittman Mead at the 2017 BIWA Summit!

Rittman Mead Consulting - Mon, 2017-01-30 15:46

We invite you to come join us at the annual 2017 BIWA Summit.


This year we are proud to announce that Robin Moffatt, Head of Research and Development, will be presenting on:

Analysing the Panama Papers with Oracle Big Data Spatial and Graph

January 31, 2017 | 3:45 pm – 4:15 pm | Room 103

Oracle Big Data Spatial and Graph enables the analysis of datasets beyond that of standard relational analytics commonly used. Through Graph technology relationships can be identified that may not otherwise have been. This has practical uses including in product recommendations, social network analysis, and fraud detection. In this presentation we will see a practical demonstration of Oracle Big Data Spatial and Graph to load and analyse the “Panama Papers” dataset. Graph algorithms will be utilised to identify key actors and organisations within the data, and patterns of relationships shown. This practical example of using the tool will give attendees a clear idea of the functionality of the tool and how it could be used within their own organisation. If Oracle Database 12cR2 on-premise is available by the time of this presentation, then its new property graph capabilities will also be covered here. The presentation will be based on a paper published on OTN: https://community.oracle.com/docs/DOC-1006400


Kafka’s Role in Implementing Oracle’s Big Data Reference Architecture on the Big Data Appliance

February 1, 2017 | 2:20 pm – 3:10 pm | Room 102

Big Data … Big Mess? Everyone wants Big Data, but without a good platform design up front there is the risk of a mess of point-to-point feeds. The solution to this is Apache Kafka, which enables stream or batch consumption of the data by multiple consumers. Implemented as part of Oracle’s Big Data Architecture on the Big Data Appliance, it acts as a data bus for the enterprise to both the data reservoir and discovery lab. This presentation will introduce the basics of Kafka, and explain how it fits within the Big Data Architecture. We’ll then see it used with Oracle GoldenGate to stream data into the data reservoir, as well as ad hoc population of discovery lab environments and microservices such as Flume, HBase, and Elasticsearch.

(Still) No Silver Bullets: OBIEE 12c Performance in the Real World

February 2, 2017 | 1:30 pm – 2:20 pm | Room 203

Are you involved in the design and development of OBIEE systems and want to know the best way to go about ensuring good performance? Maybe you've an existing OBIEE system with performance "challenges" that you need to diagnose? This presentation looks at the practical elements of diagnosing the causes of performance issues in OBIEE, and discusses good practices to observe when developing new systems. It includes discussion of OBIEE 12c, with additional emphasis on analysis of Usage Tracking data for the accurate profiling and diagnosis of issues. Why this would appeal to the audience:

  • Method-R time profiling technique applied to the OBIEE nqquery.log

  • A large number of the community use OBIEE; many will have their own performance horror stories, but fewer will have done a deep dive into analysing the time profile of long-running requests

  • Performance "right practices" will help those less familiar with performant OBIEE designs, and may prompt debate from those more experienced

As presented previously at OOW, OUGF, UKOUG, OUG Scotland, and POUG. Newly updated for OBIEE 12c.

  • Video: http://ritt.md/silver-bullets-video

  • Slides: http://ritt.md/silver-bullets-slides

Categories: BI & Warehousing

12cR1 RAC Posts -- 5 : Relocating OCR and VoteDisk

Hemant K Chitale - Mon, 2017-01-30 09:09
Most default installation guides for Grid Infrastructure will have you creating a DATA Disk Group during the install.  This results in the OCR, the Vote Disk and MGMTDB being created in the same DATA Disk Group.

The proper installation is to have the OCR and Vote Disk separated from DATA.  If you create a different Disk Group during the installation, you can have all three components (MGMTDB is new in 12c; it did not exist in 11g) properly placed in a non-DATA Disk Group that you create as the default.

But what if they have already been located in DATA ?  Can you relocate them to another Disk Group ?



I start with creating a new Disk Group called OCRVOTE on an existing unused ASM Disk.

[root@collabn1 ~]# su - oracle
[oracle@collabn1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[oracle@collabn1 ~]$ sqlplus '/ as sysasm'

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jan 30 22:34:38 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL>
SQL> set pages600
SQL> col name format a12
SQL> col path format a16
SQL> select name, path, state, group_number, disk_number
2 from v$asm_disk
3 order by 4,5,1
4 /

NAME PATH STATE GROUP_NUMBER DISK_NUMBER
------------ ---------------- -------- ------------ -----------
/dev/asm-disk4 NORMAL 0 0
DATA_0000 /dev/asm-disk1 NORMAL 1 0
DATA_0001 /dev/asm-disk2 NORMAL 1 1
DATA_0002 /dev/asm-disk5 NORMAL 1 2
FRA_0000 /dev/asm-disk3 NORMAL 2 0

SQL>
SQL> create diskgroup OCRVOTE external redundancy disk '/dev/asm-disk4';

Diskgroup created.

SQL>
SQL> select name, path, state, group_number, disk_number
2 from v$asm_disk
3 order by 4,5,1
4 /

NAME PATH STATE GROUP_NUMBER DISK_NUMBER
------------ ---------------- -------- ------------ -----------
DATA_0000 /dev/asm-disk1 NORMAL 1 0
DATA_0001 /dev/asm-disk2 NORMAL 1 1
DATA_0002 /dev/asm-disk5 NORMAL 1 2
FRA_0000 /dev/asm-disk3 NORMAL 2 0
OCRVOTE_0000 /dev/asm-disk4 NORMAL 3 0

SQL>
SQL> alter diskgroup OCRVOTE set attribute 'COMPATIBLE.ASM' = '12.1';

Diskgroup altered.

SQL>


I have identified the ASM Disk on /dev/asm-disk4 as available and created a DiskGroup on it.  I have to mount the DiskGroup on the second node as well.  (If I don't mount the DiskGroup on the second node collabn2, I get errors
"PROT-30: The Oracle Cluster Registry location to be added is not usable
PROC-50: The Oracle Cluster Registry location to be added is inaccessible on nodes collabn2."
when attempting to add the OCR on the DiskGroup).

[oracle@collabn2 trace]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jan 30 22:52:18 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup OCRVOTE mount;

Diskgroup altered.

SQL> set pages60
SQL> col name format a12
SQL> col path format a16
SQL> select name, path, state, group_number, disk_number
2 from v$asm_disk
3 order by 4,5,1
4 /

NAME PATH STATE GROUP_NUMBER DISK_NUMBER
------------ ---------------- -------- ------------ -----------
DATA_0000 /dev/asm-disk1 NORMAL 1 0
DATA_0001 /dev/asm-disk2 NORMAL 1 1
DATA_0002 /dev/asm-disk5 NORMAL 1 2
FRA_0000 /dev/asm-disk3 NORMAL 2 0
OCRVOTE_0000 /dev/asm-disk4 NORMAL 3 0

SQL>


Next, I plan to relocate the OCR which is currently in the DATA DiskGroup.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
[oracle@collabn1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1688
Available space (kbytes) : 407880
ID : 827167720
Device/File Name : +DATA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

[oracle@collabn1 ~]$
[oracle@collabn1 ~]$ su
Password:
[root@collabn1 oracle]# ocrconfig -add +OCRVOTE
[root@collabn1 oracle]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1684
Available space (kbytes) : 407884
ID : 827167720
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : +OCRVOTE
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 oracle]# ocrconfig -delete +DATA
[root@collabn1 oracle]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1684
Available space (kbytes) : 407884
ID : 827167720
Device/File Name : +OCRVOTE
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 oracle]#


I first used OCRCHECK ("ocrcheck" at the command line) to list the existing OCR location.  Then, as root, I used OCRCONFIG -ADD ("ocrconfig -add +OCRVOTE") to add +OCRVOTE as a location and verified it with OCRCHECK.  Finally, I used OCRCONFIG -DELETE ("ocrconfig -delete +DATA") to remove the old location.

I could have used "ocrconfig -replace" to replace the OCR location but I prefer ADD and DELETE.
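For reference, the single-step alternative would have looked something like this (not run here; check the exact ocrconfig syntax for your version):

# as root: replace the OCR location in one step instead of add/delete
ocrconfig -replace +DATA -replacement +OCRVOTE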



How do I relocate the VoteDisk ?  Since I am using a single DiskGroup and External Redundancy with only 1 Failure Group, I have only 1 VoteDisk.

[root@collabn1 oracle]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 96fbcb40bfeb4ff7bf18881adcfef149 (/dev/asm-disk1) [DATA]
Located 1 voting disk(s).
[root@collabn1 oracle]#
[root@collabn1 oracle]# crsctl replace votedisk +OCRVOTE
Successful addition of voting disk a58b8b9d58064fb8bf6df0b3ee701e32.
Successful deletion of voting disk 96fbcb40bfeb4ff7bf18881adcfef149.
Successfully replaced voting disk group with +OCRVOTE.
CRS-4266: Voting file(s) successfully replaced
[root@collabn1 oracle]#
[root@collabn1 oracle]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE a58b8b9d58064fb8bf6df0b3ee701e32 (/dev/asm-disk4) [OCRVOTE]
Located 1 voting disk(s).
[root@collabn1 oracle]#


With ASM as  the location for VoteDisk, I cannot use CRSCTL ADD VOTEDISK  but have to use CRSCTL REPLACE VOTEDISK.

So, I have added a new DiskGroup called OCRVOTE and relocated both OCR and VoteDisk to this new DiskGroup (on disk /dev/asm-disk4).

Can I relocate +MGMTDB ?  I am open to suggestions.


Categories: DBA Blogs

RDX 2017 Top Database Trends Series- Microsoft & Oracle Hybrid DBMS Clouds

Chris Foot - Mon, 2017-01-30 08:22

The database market landscape no longer consists of a handful of traditional database vendor offerings. The market has exploded with dozens of new database offerings and architectures from organizations across the spectrum. Competition from super-sized, nontraditional database vendors, open source offerings and startups is feeding the rapid escalation of advancements in database technologies.

RDX 2017 Top Database Trends- SQL Server on Linux

Chris Foot - Mon, 2017-01-30 08:18

In this second article of our 2017 Database Trends series, we analyze Microsoft’s SQL Server on Linux offering.   We evaluate the impact the new offering will have on the database market arena and, more specifically, what the consequences will be for the Microsoft vs Oracle war for DBMS market dominance.

RDX 2017 Top Database Trends- Multi-Model Databases & NoSQL Vendor Consolidation

Chris Foot - Mon, 2017-01-30 08:18

In this third installment of our 2017 Database Trends analysis series, we focus our attention on the emergence of multi-model database management systems.  We examine the impact the larger database vendors will have on their smaller, NoSQL competitors.  In addition, we also discuss the premise that the vendor with the best technology doesn’t always win the battle for market share – or even survive as a viable competitor in some cases.

RDX 2017 Top Database Trends- The Rising Interest in Open Source Database Offerings

Chris Foot - Mon, 2017-01-30 08:17

In the final installment of our 2017 Database Trends analysis series, we evaluate the database community’s rising interest in open source database offerings.   We discuss the impact these upstart competitors will have on the industry heavyweights that include Oracle, Microsoft and IBM.

Automation’s Impact on IT Knowledge Workers

Chris Foot - Mon, 2017-01-30 08:16

Process automation, because of its wide range of application, takes many forms. Manufacturing companies have been using industrial robots to replace activities traditionally performed by humans for some time. Business process automation shares the same goal: to replace business functions performed by humans with software applications. Work activities that are repetitive in nature and require little intelligent analysis and decision making to complete are prime candidates for process automation.

ASSM Help

Jonathan Lewis - Mon, 2017-01-30 06:33

I’ve written a couple of articles in the past about the problems of ASSM spending a lot of time trying to find blocks with usable free space. Without doing a bit of rocket science with some x$ objects, or O/S tracing for the relevant calls, or enabling a couple of nasty events, it’s not easy proving that ASSM might be a significant factor in a performance problem – until you get to 12c Release 2 where a staggering number of related statistics appear in v$sysstat.
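If you want to watch these yourself, the session-level figures shown below can be captured with a simple query against v$mystat run before and after a test (a generic sketch, not the script used for this note):

-- report the current session's ASSM-related statistics
select sn.name, ms.value
from   v$mystat   ms
join   v$statname sn on sn.statistic# = ms.statistic#
where  sn.name like 'ASSM%'
order by sn.name;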

I’ve published the extract (without explanation) at the end of this note, but here’s just a short extract showing the changes in my session’s ASSM stats due to a little PL/SQL loop inserting one row at a time into an empty table with a single index:

Name                                                                         Value
----                                                                         -----
ASSM gsp:get free block                                                        185
ASSM cbk:blocks examined                                                       185
ASSM gsp:L1 bitmaps examined                                                   187
ASSM gsp:L2 bitmaps examined                                                     2
ASSM gsp:Search hint                                                             2
ASSM gsp:good hint                                                             185

It looks like we’ve checked a couple of “level 2” bitmap blocks (one for the table, one for the index, presumably) to pick a sequence of “level 1” bitmap blocks that have been very good at taking us to a suitable data (table or index) block that can be used.

You might have expected to see numbers more like 10,000 in the output, but remember that PL/SQL has lots of little optimisations built into it and one of those is that it pins a few blocks while the anonymous block is running so it doesn’t have to keep finding blocks for every single row.
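For context, the single-session load being measured is roughly of this shape (my own approximation for illustration, not the author's actual test; the table, index and column names are made up):

-- single-session load: 10,000 single-row inserts inside one PL/SQL block
create table t1 (id number, padding varchar2(100));
create index t1_i1 on t1(id);

begin
  for i in 1 .. 10000 loop
    insert into t1 values (i, rpad('x', 100, 'x'));
  end loop;
  commit;
end;
/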

In comparison here’s the effect of the same data load when operated at 10,000 separate insert statements called from SQL*Plus:

Name                                                                         Value
----                                                                         -----
ASSM gsp:get free block                                                     10,019
ASSM cbk:blocks examined                                                    10,220
ASSM cbk:blocks marked full                                                    201
ASSM gsp:L1 bitmaps examined                                                10,029
ASSM gsp:L2 bitmaps examined                                                     6
ASSM gsp:L2 bitmap full                                                          1
ASSM gsp:Search all                                                              1
ASSM gsp:Search hint                                                             2
ASSM gsp:Search steal                                                            1
ASSM gsp:bump HWM                                                                1
ASSM gsp:good hint                                                          10,016
ASSM rsv:fill reserve                                                            1

It's interesting to note that in this case we see (I assume) a few cases where we've done the check for an L1 bitmap block, gone to a data block that was apparently free, and discovered that our insert would make it over-full – hence the 201 "blocks marked full".

Critically, of course, this is just another of the many little indications of how "client/server" chatter introduces lots of little bits of extra work when compared to the "Thick DB" approach.

One final set of figures. Going back to an example that first alerted me to the type of performance catastrophes that ASSM could contribute to, I re-ran my test case on 12.2 and checked the ASSM figures reported. The problem was that a switch from a 4KB or 8KB block size to a 16KB block size produced a performance disaster. A version of my test case and some timing results are available on Greg Rahn's site.

In my test case I have 830,000 rows and do an update that sets column2 to column1, changing it from null to an 8-digit value. With a 16KB block size and PCTFREE set to a highly inappropriate value of 10, this is what the ASSM statistics look like:


Name                                   Value
----                                   -----
ASSM gsp:get free block              668,761
ASSM cbk:blocks examined             671,404
ASSM cbk:blocks marked full            2,643
ASSM gsp:L1 bitmaps examined       1,338,185
ASSM gsp:L2 bitmaps examined         672,413
ASSM gsp:Search all                      332
ASSM gsp:Search hint                 668,760
ASSM gsp:Search steal                    332
ASSM gsp:bump HWM                        332
ASSM wasted db state change          669,395

I’d love to know what the figures would have looked like if they had been available in the original Oracle 9.2.0.8 case; they look fairly harmless in this case even though the database (according to other stats) did something like 10 times the work you might expect.

Even here, though, where the original catastrophic bug has been addressed, the ASSM stats give you an important clue: we've been doing a simple update, so why have we spent so much effort looking for free space (get free block)? Even stranger, how come we had to examine 1.3M L1 bitmaps when we've only updated 830,000 rows; surely the worst-case scenario shouldn't have been worse than 1 to 1? And then there's that "wasted db state change" – I don't understand exactly what that last statistic is telling me, but when I'm worried about performance I tend to worry about anything that's being wasted.

In passing – if you want to insert a single row into an unindexed table you can expect Oracle to examine the segment header, then an L2 bitmap block, then an L1 bitmap block to find a data block for the insert. (In rare cases that might be segment header, L3, L2, L1). There are then optimisation strategies for pinning blocks – the session will pin the L1 bitmap block briefly because it may have to check several data blocks it references because they may be full even though they are flagged as having space; similarly the session will pin the L2 bitmap block because it may need to mark an L1 bitmap block as full and check another L1 block. The latter mechanism probably explains why we have examined more L1 bitmaps than L2 bitmaps.

Finally, the full monty

Just a list of all the instance statistics that start with “ASSM”:

ASSM bg: segment fix monitor
ASSM bg:create segment fix task
ASSM bg:mark segment for fix
ASSM bg:slave compress block
ASSM bg:slave fix one segment
ASSM bg:slave fix state
ASSM cbk:blocks accepted
ASSM cbk:blocks examined
ASSM cbk:blocks marked full
ASSM cbk:blocks rejected
ASSM fg: submit segment fix task
ASSM gsp:Alignment unavailable in space res
ASSM gsp:L1 bitmaps examined
ASSM gsp:L2 bitmap full
ASSM gsp:L2 bitmaps examined
ASSM gsp:L3 bitmaps examined
ASSM gsp:Optimized data block rejects
ASSM gsp:Optimized index block rejects
ASSM gsp:Optimized reject DB
ASSM gsp:Optimized reject l1
ASSM gsp:Optimized reject l2
ASSM gsp:Search all
ASSM gsp:Search hint
ASSM gsp:Search steal
ASSM gsp:add extent
ASSM gsp:blocks provided by space reservation
ASSM gsp:blocks rejected by access layer callback
ASSM gsp:blocks requested by space reservation
ASSM gsp:bump HWM
ASSM gsp:get free block
ASSM gsp:get free critical block
ASSM gsp:get free data block
ASSM gsp:get free index block
ASSM gsp:get free lob block
ASSM gsp:good hint
ASSM gsp:reject L1
ASSM gsp:reject L2
ASSM gsp:reject L3
ASSM gsp:reject db
ASSM gsp:space reservation success
ASSM gsp:use space reservation
ASSM rsv:alloc from reserve
ASSM rsv:alloc from reserve fail
ASSM rsv:alloc from reserve succ
ASSM rsv:clear reserve
ASSM rsv:fill reserve
ASSM wasted db state change

Oracle Core factor and Oracle or non-Oracle Cloud

Yann Neuhaus - Mon, 2017-01-30 05:10

Oracle Database licensing can be based on two metrics: Processor or Named User Plus (NUP). NUP licensing also requires counting processors, because there is a minimum number of NUP per processor. So it is very important to know how to count processors. The days when a machine had 1 socket with 1 processor with 1 chip with 1 core running 1 thread are long gone. Today, we have physical machines with multi-core processors running multiple threads per core, and virtualization allocates part of them to virtual machines, often called vCPUs.

Oracle still uses the name 'processor', but the way of counting them varies by processor vendor (Core Factor), virtualization solution, and cloud provider.

References

The main documents referenced here are:

The Oracle Processor Core Factor Table: http://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf which describes how to adjust core count to processor count. It also describes how to count Oracle Cloud CPU (aka OCPU).

The Licensing Oracle Software in the Cloud Computing Environment: http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf which describes how to count Amazon AWS vCPU and Microsoft Azure Cores.

There is no mention of other cloud providers, but they all run virtualization, so the Oracle Partitioning Policy comes into play for counting physical vs. virtual CPUs: http://www.oracle.com/us/corporate/pricing/partitioning-070609.pdf. However, their hosting contracts may say different things.

At the end of those documents, you see a mention that they are 'for educational purposes only', so basically they carry no legal weight, and this blog post, being my interpretation of them, has no legal value at all… Your contracts hold the truth when it comes to legal interpretation.

So you might wonder what the point is of reading those documents if they have no legal value. Well, your goal as an IT manager is not to go to court and fight with attorneys. If you want to avoid problems, it is better to read those rules and adhere to them, whether you find them fair or not. Of course, you can also choose to go to other vendors with other models, such as open source. But that's not the goal of this post. Here I assume that you like the Oracle Database technology, as much as I do, and that you are looking for a way to run it at an affordable cost without worrying about LMS audits…

Something else about those documents: they change, so you may want to look at their history. For example, the definitions for Amazon and Azure were a bit different only a few days ago, and the Wayback Machine can help us: http://web.archive.org/web/20161226082104/http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf.

Vendors

Of course, there may be technical reasons behind the core factor and the virtualization and cloud factors. But let's face it: besides the database, Oracle is also a hardware vendor, a virtualization software vendor, and a cloud provider. They are at the top of the database market and want to achieve the same in the cloud market. Making the database cheaper on their own products may be a way to get there. Customers wanting to stay with Oracle Database will be tempted to move to the full Oracle stack.

The combination of core factor and virtualisation rules clearly disadvantages all competitors:

  • IBM LPAR virtualisation is accepted, but the IBM POWER core factor makes the database twice as expensive as on the processors you find in Oracle hardware (Intel, SPARC)
  • Most data centres run on VMware ESX with Intel's 0.5 core factor, but counting vCPUs is not accepted and the whole datacenter may have to be licensed
  • We are allowed to count vCPUs on Amazon and Azure virtualization, and they run Intel Xeon, but the core factor applies only on the Oracle Cloud. You need twice the processor licenses on AWS or Azure for the same processor cores
  • Disabling cores from the BIOS is how Capacity on Demand is done on a bare-metal ODA, but we are not allowed to do the same on similar hardware

So basically, running Oracle Database is cheaper on Oracle hardware, Oracle virtualization and the Oracle Cloud. When you run it on other vendors' platforms, you can't do capacity on demand, and you need more processor licenses for the same number of CPUs.

Enterprise Edition

On the Oracle Cloud, you allocate CPU resources through shapes measured in OCPUs. One OCPU is the equivalent of one Intel Xeon core with 2 threads, and one processor license covers 2 OCPUs. This looks like the core factor for Intel Xeon, even if it is not directly mentioned (though the equivalence is written in the core factor document). So one processor license covers 2 cores, i.e. 4 vCPUs with hyper-threading.

On Amazon EC2 and RDS, one processor license covers 1 core. This can be 1 vCPU (T2 and m3.medium have one thread per core) or 2 vCPUs (hyper-threaded instances). The AWS document https://aws.amazon.com/ec2/instance-types/ mentions that each vCPU is a hyperthread of an Intel Xeon core, except for T2 and m3.medium.

On Microsoft Azure, one processor license covers 1 core. This is 1 vCPU, as hyper-threading is not enabled.
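To put some illustrative numbers on this, applying the rules above: take 8 physical Intel Xeon cores with hyper-threading, i.e. 16 threads. On the Oracle Cloud that is 8 OCPUs, so 4 processor licenses. On Amazon EC2/RDS the same hardware appears as 16 vCPUs backed by 8 cores, so 8 processor licenses. On Azure, with hyper-threading disabled, 8 cores are 8 vCPUs, again 8 processor licenses: twice the Oracle Cloud figure for the same cores.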

23-Jan-2017

I mentioned looking at the archive.org version of the documents. Tim Hall has described what has changed on his blog.

The previous document was not very clear, mentioning 'virtual cores'. Actually, 'virtual cores' are the number of cores behind the vCPUs: one for T2 instances (and m3.medium), two for hyper-threaded instances. This is listed precisely for Amazon at https://aws.amazon.com/ec2/virtualcores/. It is also clear for Azure, as there is no hyper-threading.

But what has changed is that there were previously two mentions of applying the core factor:

  • An example of 8 virtual cores requiring 8 * 0.5 = 4 processor licenses
  • The mention that AWS implements Oracle VM and therefore follows the rules of hard partitioning and the 'standard processor metric', which, IMO, includes the core factor

Those two have disappeared in the new document, which now explicitly states that "When counting Oracle Processor license requirements in Authorized Cloud Environments, the Oracle Processor Core Factor Table is not applicable". This makes the Oracle Cloud's competitors twice as expensive as before for running Oracle Databases. The fun part is that IaaS providers that are not 'Authorized', and private clouds, can apply the core factor and count vCPUs as long as they virtualize with Oracle VM on Intel Xeon. But hosting companies may have different rules in their contracts.

NUP

When licensing by NUP, the metric is not the number of processors. But you still have to count them, because there is a minimum of 25 NUP per processor. This means that if you have only 25 NUP (for a development environment with a few developers, for example) you can run it on 2 cores in the Oracle Public Cloud, but only on one core in the other 'Authorized' clouds.
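As an illustration (my numbers, applying the rules above): a server with two 8-core Intel Xeon sockets has 16 cores, which at a 0.5 core factor counts as 8 processors, so the NUP minimum would be 8 x 25 = 200 NUP, however few named users actually exist.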

Standard Edition

Note that the core factor has nothing to do with Standard Edition. The processor metric for Standard Edition is not a core but a socket. On current Intel processors, a socket can hold 12, 18, or 24 cores. But the maths is different in the Cloud from an Oracle licensing perspective.

In the Oracle Cloud, one 'socket' is 4 OCPUs, which is 4 cores (8 threads). An instance with 1 to 8 vCPUs requires 1 SE2 processor license; 9 to 16 vCPUs require 2 SE2 processor licenses, and I think that is the maximum you can have on the Oracle Cloud. So the idea is that one socket is equivalent to 4 cores. This is low, but it was worse before October, when one 'socket' was only 2 cores.

It is also worse in the competitors' clouds, as this has not changed: it is still one 'socket' per 2 cores. This means that in Azure an SE2 processor license covers 2 cores, and in Amazon it covers 4 vCPUs (which are 4 threads on 2 cores).

Standard Edition 2

Now, remember that SE2 has a limit of 16 threads on-premises; the limit in the Cloud is 8 vCPUs on Amazon. You cannot run SE2 on an AWS instance with more than 8 vCPUs. This is a big limitation. On-premises, the limit is only a soft one: the resource manager will schedule at most 16 user sessions on CPU, but background processes can use more resources. On AWS, you are limited to 8 CPU threads in total, which means that you have only 4 cores for all processes, user and background included. And CPU starvation is not very good when you are waiting for LGWR.

This is even worse on Azure where the limit is 4 cores without hyper-threading.

This is clearly stated in the 'Authorized Cloud' document: "Oracle Standard Edition One and Standard Edition 2 may only be licensed on Authorized Cloud Environment instances up to eight Amazon vCPUs or four Azure CPU Cores." Here again, SE2 comes at the price of SE but with the limitations of SE1.

Once again, the rules are friendlier on the Oracle Cloud, where you can create a Standard Edition instance on 16 OCPUs (which is 32 threads). The SE2 resource manager will still limit scheduling to 16 user sessions, and each will have a core of its own. And background processes will be able to use the additional resources coming from hyper-threading.

ULA

There are two important points regarding ULAs and 'authorized clouds': ULA licenses can be used there during the ULA, but they don't count towards certification when exiting the ULA.

So what?

I don't think I'll risk a conclusion here. Having different prices for the same software running on similar hardware doesn't look fair. I'm more annoyed by the difficulty of comparing, especially when the rules change from time to time. But I can understand that vendors want to push their own products.

Let's get technical. My recommendation is to use as few CPU resources as possible, so that you don't have to run your application on dozens of vCPUs. This is why we like Oracle: it is very efficient for both OLTP and BI. Most of the code behind your critical use cases was optimized decades ago for single-core processors. Look at an AWR report, identify the queries that are responsible for most of the CPU time, and tune them. Stop hard parsing over and over. Stop reading the same blocks again and again. Stop doing row-by-row round-trips to your database. Don't waste your CPU cycles on all this unnecessary work and you will save money, whatever the 'core factor' is.

 

This article, Oracle Core factor and Oracle or non-Oracle Cloud, appeared first on Blog dbi services.

Oracle SQL List Tables in Database or Schema

Complete IT Professional - Mon, 2017-01-30 05:00
Do you need to list all tables in Oracle? There isn't an SQL "list tables" command, but you can learn how to show all tables in this article. If you've forgotten the name of a specific table, or forgotten how to spell a table (was it plural or singular? One word […]
Categories: Development

Finding if record is overlap or not

Tom Kyte - Mon, 2017-01-30 04:06
Hi Tom, Could you please help me with the SQL for finding whether a record is an overlap or not. Here is the test case.. seq_val is calculated first and I would like to check overlaps based on that order.. ID Start_dt end_dt Seq_val 1 01/0...
Categories: DBA Blogs

Range scan on compound keys

Tom Kyte - Mon, 2017-01-30 04:06
Hi Tom, Is there a trick you can use in a WHERE clause to get the optimiser to spot that you are trying to do a range scan on a composite key? For example, if I have a table which contains YEAR and PERIOD and I have an index on those columns what ...
Categories: DBA Blogs

ORA-29400: data cartridge error -- Windows

Tom Kyte - Mon, 2017-01-30 04:06
Hi, Ever since I have upgraded my database from 11.2.0.2 to 11.2.0.4 I started seeing these errors in my alert log: Sat Dec 10 06:00:12 2016 DBMS_STATS: GATHER_STATS_JOB encountered errors. Check the trace file. Errors in file D:\APP\ORAC...
Categories: DBA Blogs

"ORA-01733: virtual column not allowed here" from update on view when SELECT FOR UPDATE OF

Tom Kyte - Mon, 2017-01-30 04:06
Tom, I have a problem while updating, or even selecting, if the statement contains a FOR UPDATE OF clause against a view where an INSTEAD OF trigger performs the insert, update, and delete. One of the view columns is a calculated column which contains a decode functio...
Categories: DBA Blogs

Histogram in oracle

Tom Kyte - Mon, 2017-01-30 04:06
Hi Tom, Please tell me about histograms in Oracle, and give an example?
Categories: DBA Blogs

restore single table from rman full backup

Tom Kyte - Mon, 2017-01-30 04:06
Hi, I have an RMAN full backup and I want to restore a single table because that table was dropped. How can I do this? I cannot create a clone due to space and resource issues. Thanks
Categories: DBA Blogs

Quick Pick in Apex Report

Jeff Kemp - Mon, 2017-01-30 02:08

I have an Interactive Report that includes some editable columns, and the users wanted to include some “quick picks” on these columns to make it easy to copy data from a previous period. The user can choose to type in a new value, or click the “quick pick” to quickly data-enter the suggested value.

(Screenshot: report with an editable column and quick pick links)

Normally, a simple page item can have a quick pick by setting the Show Quick Picks attribute on the item. This is not, however, available as an option when generating Apex items in a report.

To do this, I added code like the following to my report query:

SELECT ...
      ,APEX_ITEM.textarea(5,x.ytd_comments
         ,p_rows => 1
         ,p_cols => 30
         ,p_item_id => 'f05_'||to_char(rownum,'fm00000'))
       || case when x.prev_ytd_comments is not null
          then '<a href="javascript:$(''#'
            || 'f05_' || to_char(rownum,'fm00000')
            || ''').val('
            || apex_escape.js_literal(x.prev_ytd_comments)
            || ').trigger(''change'')">'
            || apex_escape.html(x.prev_ytd_comments)
            || '</a>'
          end
       as edit_ytd_comments
FROM my_report_view x;

This results in the following HTML code being generated:

(Screenshot: the generated HTML for the textarea item and quick pick link)

In the report definition, the EDIT_YTD_COMMENTS column has Escape Special Characters set to No. This runs a real risk of adding an XSS attack vector to your application, so be very careful to escape any user-entered data (such as prev_ytd_comments in the example above) before allowing it to be included. In this case, the user-entered data is rendered as the link text (so it is escaped using APEX_ESCAPE.html) and also within some javascript (so it is escaped using APEX_ESCAPE.js_literal).

So, if the data includes any characters that conflict with html or javascript, it is neatly escaped:

(Screenshot: the generated HTML with the awkward data escaped)

And it is shown on screen as expected, and clicking the link copies the data correctly into the item:

(Screenshot: the quick pick rendered on screen and copied correctly into the item)

This technique should, of course, work with most of the different item types you can generate with APEX_ITEM.
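As a side note, the values typed (or quick-picked) into these generated items arrive back in the APEX_APPLICATION.G_F05 array, since index 5 is the first argument passed to APEX_ITEM.textarea. A minimal, hypothetical after-submit process might look like the following; MY_TABLE, its columns and the hidden f01 key item are made-up names for illustration, not part of the original example:

-- hypothetical page process: apply the submitted YTD comments
begin
  for i in 1 .. apex_application.g_f05.count loop
    update my_table t
    set    t.ytd_comments = apex_application.g_f05(i)
    where  t.id = apex_application.g_f01(i);  -- assumes a matching hidden f01 key column
  end loop;
end;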

Recommended further reading:


Filed under: APEX Tagged: apex-5.0, javascript, tips-&-tricks

Cloning EBS 12.2 Environments Integrated with Oracle Access Manager

Steven Chan - Mon, 2017-01-30 02:05

We have documented procedures for cloning EBS 12.2 environments.  We also have documented procedures for integrating EBS 12.2 environments with Oracle Access Manager (OAM) and Oracle Internet Directory (OID).  The next logical question would be: do we have documented procedures for cloning EBS 12.2 environments that have been integrated with OAM and OID?

Yes, we have published this here:

(Diagram: EBS 12.2 / OAM integration architecture)

This Note provides a certified process and detailed steps to:

  • Clone EBS using Rapid Clone
  • Deregister the cloned EBS instance from OAM and remove AccessGate
  • Remove OID from the cloned EBS instance
  • Integrate the cloned EBS instance with OID
  • Integrate the cloned EBS instance with OAM
  • Reconfigure SSL

Related Articles

Categories: APPS Blogs
