Feed aggregator

Latest Release of Industry Leading Oracle Database Now Available in the Cloud, with Oracle Cloud at Customer, and On-Premises

Oracle Press Releases - Mon, 2017-03-06 10:00
Press Release
Latest Release of Industry Leading Oracle Database Now Available in the Cloud, with Oracle Cloud at Customer, and On-Premises
Leading-edge database technology now available in all environments

Redwood Shores, Calif.—Mar 6, 2017

Oracle today announced that the latest version of the world’s number one database, Oracle Database 12c Release 2 (12.2), is now available everywhere—in the cloud, with Oracle Cloud at Customer, and on-premises. Oracle Database 12.2 includes enhancements to the unique multitenant architecture and in-memory database technologies that provide customers with outstanding consolidation, performance, reliability, and security for all workloads including entry-level development and mission critical workloads. Additionally, the release includes more than 300 new features and enhancements across availability, performance, security, and developer productivity.

“Oracle has led the industry by providing the fastest and most reliable highly secure database for organizations of all sizes, and we continue to innovate and help customers transform to the cloud with minimum effort and risk,” said Andy Mendelsohn, executive vice president of database server technologies, Oracle. “With this announcement, Oracle is completing the rollout of Oracle Database 12.2, making it even easier for organizations to deploy Oracle Database 12c wherever they need it—in the cloud or on-premises.”

Oracle Database 12.2’s massive cloud scalability and real-time analytics offer customers greater agility, faster time to business insights, and real cost advantages. New innovations in Oracle Database 12.2 include:

  • Massive savings for consolidated and SaaS environments with up to 4,096 pluggable databases
  • Increased agility with online cloning and relocation of pluggable databases
  • Greatly accelerated in-memory database performance
  • Offload of real-time in-memory analytics to active standby databases
  • Native database sharding
  • Massive scaling with Oracle Real Application Clusters (RAC)
  • JSON document store enhancements

Multiple independent industry analyst reports recently recognized Oracle Database 12c's technology leadership for common database workloads, including online transaction processing, hybrid transactional and analytical processing, data warehousing, internet of things (IoT), and in-memory database.

Showcasing the impact of Oracle Database 12c for SaaS Clouds, Oracle Taleo Talent Management Cloud deployed Oracle Multitenant and achieved 25x more efficiency than using a virtual machine based architecture.

Oracle Cloud

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new and existing cloud environments, hybrid deployments, and all workloads, developers, and data. The Oracle Cloud delivers nearly 1,000 SaaS applications and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world and supports 55 billion transactions each day.

For more information, please visit us at http://cloud.oracle.com.

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


User Active or Deactive

Tom Kyte - Mon, 2017-03-06 06:46
I used "alter user scott account lock;" for lock the account. Now i try to access database so oracle prompts a message user is locked. but when i connect via website build in asp.net , I first time also get message via exception handling in front-s...
Categories: DBA Blogs

weird behavior for namespaces of public synonym and normal tables

Tom Kyte - Mon, 2017-03-06 06:46
Hi Tom, I am currently working with synonyms in Oracle and found the following: SQL> create public synonym mysynonym for myschema.mytable; Synonym created. SQL> create table myschema.mytable(a int); ERROR at line 1: ORA-00955: name is alre...
Categories: DBA Blogs

DB Block size greater than 8K

Tom Kyte - Mon, 2017-03-06 06:46
Hi Tom, I am looking for guidance on when to choose a block size greater than 8K for an Oracle DB. I have seen a few posts from the past which indicated 8K typically should do fine for most scenarios but have always been under the impression that for D...
Categories: DBA Blogs

60 SQL Interview Questions and Answers

Complete IT Professional - Mon, 2017-03-06 05:00
Are you going for a job where you need to know SQL, such as a Database Developer or Database Administrator? Brush up on your interview questions with this extensive list of SQL interview questions. This collection of interview questions on SQL has been collated from my experience with SQL and from various websites. It contains […]
Categories: Development

12cR1 RAC Posts -- 7 : OCR Commands

Hemant K Chitale - Mon, 2017-03-06 03:11
[Yes, I know that 12.2 is now available for download but it will be some time before I have a running 12.2 RAC environment]

Some OCR / OLR Commands :

The OCR is the Cluster Registry.  We also have an OLR that is the Local Registry which is created on a local filesystem.

We can check the consistency of the Registry with ocrcheck.  Note the difference between running the check as oracle (or grid) and as root: oracle can't check the OLR and can't do a logical consistency check of the OCR -- both require running as root.

[root@collabn1 ~]# su - oracle
[oracle@collabn1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[oracle@collabn1 ~]$ ocrcheck -local
PROTL-602: Failed to retrieve data from the local registry
PROCL-26: Error while accessing the physical storage Operating System error [Permission denied] [13]
[oracle@collabn1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1676
Available space (kbytes) : 407892
ID : 827167720
Device/File Name : +OCRVOTE
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

[oracle@collabn1 ~]$ su
Password:
[root@collabn1 oracle]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1036
Available space (kbytes) : 408532
ID : 1014277103
Device/File Name : /u01/app/12.1.0/grid/cdata/collabn1.olr
Device/File integrity check succeeded

Local registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 oracle]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1676
Available space (kbytes) : 407892
ID : 827167720
Device/File Name : +OCRVOTE
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 oracle]#


Oracle automates backups of the OCR (but not the OLR!).  Below, -showbackuploc shows the location of the backups.

[root@collabn1 oracle]# ocrconfig -showbackuploc
The Oracle Cluster Registry backup location is [/u01/app/12.1.0/grid/cdata/]
[root@collabn1 oracle]# ls -lt /u01/app/12.1.0/grid/cdata
total 1272
-rw-------. 1 root oinstall 503484416 Mar 6 17:03 collabn1.olr
drwxrwxr-x. 2 oracle oinstall 4096 Jan 16 14:12 collabn-cluster
drwxr-xr-x. 2 oracle oinstall 4096 Dec 19 15:06 collabn1
drwxr-xr-x. 2 oracle oinstall 4096 Dec 19 14:37 localhost
[root@collabn1 oracle]# ls -lt /u01/app/12.1.0/grid/cdata/collabn1
total 820
-rw-r--r--. 1 root root 839680 Dec 19 15:06 backup_20161219_150615.olr
[root@collabn1 oracle]# ocrconfig -showbackup

collabn1 2017/01/16 14:09:40 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup00.ocr 0

collabn1 2017/01/16 14:09:40 /u01/app/12.1.0/grid/cdata/collabn-cluster/day.ocr 0

collabn1 2017/01/16 14:09:40 /u01/app/12.1.0/grid/cdata/collabn-cluster/week.ocr 0

collabn2 2016/12/19 15:47:24 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup_20161219_154724.ocr 0

collabn2 2016/12/19 15:47:16 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup_20161219_154716.ocr 0
[root@collabn1 oracle]#


All recent (4-hourly, daily, weekly) backups of the OCR are on the "master" node -- collabn1 -- which comes up first in my cluster.  The 19-Dec backups (of the OCR and OLR) are from when I started setting up the cluster.  Note that there are no subsequent (automated) OLR backups.
Note: There are no 4-hourly/daily/weekly backups since 16-Jan because I haven't had my cluster running long enough for those backups to kick in.

[root@collabn1 oracle]# ocrconfig -local -manualbackup

collabn1 2017/03/06 17:11:29 /u01/app/12.1.0/grid/cdata/collabn1/backup_20170306_171129.olr 0

collabn1 2016/12/19 15:06:15 /u01/app/12.1.0/grid/cdata/collabn1/backup_20161219_150615.olr 0
[root@collabn1 oracle]# ocrconfig -manualbackup

collabn1 2017/03/06 17:12:21 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup_20170306_171221.ocr 0

collabn2 2016/12/19 15:47:24 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup_20161219_154724.ocr 0

collabn2 2016/12/19 15:47:16 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup_20161219_154716.ocr 0
[root@collabn1 oracle]#


I can run manual backups (the -local flag is for the OLR) as shown above.

It is important to include these backups in the backup strategy for the filesystem(s) that hold the Grid Infrastructure and RDBMS installations (binaries, configuration files, trace files etc.).
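As a small, hedged sketch (paths as used in this environment; the backup destination and host are assumptions), the whole cdata directory -- automatic OCR backups plus manual OLR backups -- could be archived and shipped off the node:

[root@collabn1 ~]# tar -czf /backup/gi_cdata_$(date +%Y%m%d).tar.gz /u01/app/12.1.0/grid/cdata
[root@collabn1 ~]# scp /backup/gi_cdata_*.tar.gz backuphost:/backups/collabn-cluster/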
.
.
.
Categories: DBA Blogs

Can You Share EBS Database Homes?

Steven Chan - Mon, 2017-03-06 02:04

No. The Oracle E-Business Suite database ORACLE_HOME cannot be shared between multiple EBS instances.

The Oracle E-Business Suite database ORACLE_HOME must be used exclusively for a single EBS database.  It cannot be shared with other Oracle E-Business Suite instances or other applications.  This applies to all EBS releases, including EBS 12.1 and 12.2.

Why does this restriction exist?

Configurations, log files, and more must be unique to a given instance.  Existing tools are designed to work with a single database associated with a single application.  For example, the EBS pre-clone tool creates a clone directory that is related to a specific database.  AutoConfig is designed to run for a particular application+database combination.

What are the support implications if you ignore this restriction?

Running these tools in an environment where multiple applications are associated with a single database ORACLE_HOME will have unpredictable results.  If you report an issue whose root cause is found to be due to the sharing of a single database ORACLE_HOME between multiple EBS instances, our default recommendation would be to revert to a configuration where each EBS instance has its own database ORACLE_HOME.

Oracle will produce patches only for issues that can be reproduced in an environment where a single database ORACLE_HOME is associated with a single EBS application. 

Categories: APPS Blogs

OUAF 4.3.0.4.0 On its way

Anthony Shorten - Sun, 2017-03-05 15:32

We are currently putting the final touches on the next service pack (SP4) for the latest Oracle Utilities Application Framework release (4.3). This is a very exciting release for us: a lot of the functionality we use for the cloud implementations of our products is being made available to customers on cloud as well as to customers on non-cloud implementations.

Over the next few weeks I will be releasing a series of articles highlighting some of the major changes we have introduced in the service pack that will be of interest to people in the field working on non-cloud implementations.

The release adds new functionality, updates existing functionality and retires functionality that we have previously announced as deprecated. You will start seeing products released based upon this new service pack in the upcoming months.

It is a very exciting time for Oracle Utilities and this release will be a foundation for even more exciting functionality we have planned going forward.

EMEA Edge Conference

Anthony Shorten - Sun, 2017-03-05 15:24

I will be attending the EMEA Edge Conference in Reading, UK, which will be held on April 25-26, 2017. I am planning to hold the same technical sessions as I did at the AMER conference earlier this year. As with that conference, the sessions are a combination of what we have achieved, what we are planning, and some tips and techniques to take back to your implementations of the products.

I would like to thank the participants of my AMER and JAPAC sessions who provided me with valuable insight into the market which we can factor into our ongoing roadmaps.

The sessions we are planning are outlined in my previous blog entry on the Edge technical stream.

undo tablespace

Tom Kyte - Sun, 2017-03-05 12:26
Hi Tom, I have few questions related to undo tablespace. 1)how to startup the database if undo datafile lost and no backup(without undo how uncommitted transactions will be rolled back) 2)In rac if one undo datafile get corrupted,only that i...
Categories: DBA Blogs

Cursor with FOR UPDATE NOWAIT clause UTL_FILE.FOREMOVE ORA-29285: file write error-

Tom Kyte - Sun, 2017-03-05 12:26
Dear Experts, I am having problem with a oracle proc inside package which writes named xxx.txt file on the linux server. cursor with FOR UPDATE NOWAIT clause fetch data from the tables and write data into the file using the UTL_FILE oracle functio...
Categories: DBA Blogs

Bad, Bad data

Kubilay Çilkara - Sat, 2017-03-04 15:10



One should feel really sorry for anyone who relies on filtering and making decisions based on bad, bad data. It is going to be a bad decision.


This is serious stuff. I read a recent study by IBM the other day which shows that "bad data" costs the US $3.1 trillion per year!


OK, let's say you don't mind the money and have money to burn; what about the implications of using the bad data? As the article hints, these could be misinformation, wrong calculations, bad products, and weak decisions (mind you, these will be weak/wrong 'data-driven' decisions). Opportunities can be lost here.


So why all this badness? Isn't it preventable? Can't we do something about it?


Here are some options


1) Data Cleansing: This is a reactive solution where you clean, disambiguate, correct, and change the bad data and put the right values in the database after you find the bad data. This is something you do when it is too late and you already have bad data. Better late than never, but it is a rather expensive and very time-consuming solution, never mind the chance that you can still get it wrong. There are tools out there which you can buy to help you do data cleansing. These tools will 'de-dupe' and correct the bad data up to a point. Of course data cleansing tools alone are not enough; you will still need those data engineers and data experts who know your data, or who can study your data, to guide you. Data cleansing is the damage-control option. It is a solution hinted at in the article as well.


2) Good Database Design: Use constraints! My favourite option. Constraints are key at database design time: put native database checks and constraints on the database entities and tables to guarantee the entity integrity and referential integrity of your database schema, and validate more! Do not just create plain, simple database tables; always think of ways to enforce the integrity of your data, and do not rely only on application code. Prevent NULLs or empty strings as much as you can at database design time, and put unique indexes and logical check constraints inside the database. Use the database tools and features you are already paying for in your database license and that are already available to you; do not re-invent the wheel, validate your data. This way you will prevent the 'creation' of bad data at the source (see the short sketch below)! Take a proactive approach. In projects, don't just skip the database constraints and say you will do it in the app or later. You know you will not; chances are you will forget it in most cases. Also, apps can change, but databases tend to outlast the apps. Look at a primer on how to do Database Design.
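As a minimal sketch of what this looks like in practice (all table and column names are made up for illustration), the integrity rules live with the data itself:

-- Hypothetical example: enforce integrity at design time instead of in application code
CREATE TABLE customers (
  customer_id   NUMBER                          CONSTRAINT customers_pk PRIMARY KEY,
  email         VARCHAR2(320)  NOT NULL,
  country_code  CHAR(2)        NOT NULL,
  status        VARCHAR2(10)   DEFAULT 'ACTIVE' NOT NULL,
  CONSTRAINT customers_email_uq  UNIQUE (email),
  CONSTRAINT customers_email_ck  CHECK (INSTR(email, '@') > 1),
  CONSTRAINT customers_status_ck CHECK (status IN ('ACTIVE', 'INACTIVE'))
);

CREATE TABLE orders (
  order_id     NUMBER        CONSTRAINT orders_pk PRIMARY KEY,
  customer_id  NUMBER        NOT NULL,
  order_total  NUMBER(12,2)  CONSTRAINT orders_total_ck CHECK (order_total >= 0),
  CONSTRAINT orders_customer_fk FOREIGN KEY (customer_id) REFERENCES customers
);

With these in place, bad rows are rejected at insert time, long before anyone has to pay to cleanse them.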


My modus operandi is option 2: a good database design and data engineering can save you money, a lot of money. Don't rush into projects by neglecting or skipping database tasks; engage the data experts and software engineers with the business, find out the requirements, talk about them, ask many questions and do data models. Reverse engineer everything in your database and have a look. Know your data! That's the only way to have good, integral and reliable true data, and it will help you and your customers win.

Categories: DBA Blogs

Purging Unified Audit Trail in 12cR1

Yann Neuhaus - Sat, 2017-03-04 11:24

When you want to empty a table you have two methods: delete and truncate. If, for any reason (see the previous post), the Unified Audit Trail has become too big, you cannot directly delete or truncate the table. You must call dbms_audit_mgmt.clean_audit_trail. But then you want to know whether it will do slow deletes or quick truncates. Let’s trace it.

I have filled my Unified Audit Trail with hundreds of thousands of failed logins:
SQL> select unified_audit_policies,action_name,count(*) from unified_audit_trail group by unified_audit_policies,action_name;
 
UNIFIED_AUDIT_POLICIES ACTION_NAME COUNT(*)
---------------------------------------- -------------------- ----------
EXECUTE 2
ORA_LOGON_FAILURES LOGON 255799

We have two methods to purge: purge records older than a timestamp or purge all.

Purge old

Auditing is different than logging. It’s a security feature. The goal is not to keep only recent information by specifying a retention. The goal is to read, process and archive the records, and then set a timestamp to the high water mark that has been processed. Then a background job will delete what is before this timestamp.

I set the timestamp to 6 hours before now

SQL> exec dbms_audit_mgmt.set_last_archive_timestamp(audit_trail_type=>dbms_audit_mgmt.audit_trail_unified
,last_archive_time=>sysdate-6/24);
PL/SQL procedure successfully completed.

And call the clean procedure:

SQL> exec dbms_audit_mgmt.clean_audit_trail(audit_trail_type=>dbms_audit_mgmt.audit_trail_unified
,use_last_arch_timestamp=>TRUE);
PL/SQL procedure successfully completed.

This was fast, but let’s look at the tkprof output. Besides some selects, I see a delete on the CLI_SWP$ table that stores the Unified Audit Trail in SecureFile LOBs:

delete from "CLI_SWP$2f516430$1$1" partition("HIGH_PART")
where
max_time < :1
 
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.47 1.82 20 650 47548 6279
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.47 1.82 20 650 47548 6279
 
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 7 (recursive depth: 1)
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 DELETE CLI_SWP$2f516430$1$1 (cr=650 pr=20 pw=0 time=1827790 us)
6279 6279 6279 PARTITION RANGE SINGLE PARTITION: 1 1 (cr=248 pr=0 pw=0 time=15469 us cost=5 size=18020 card=530)
6279 6279 6279 TABLE ACCESS FULL CLI_SWP$2f516430$1$1 PARTITION: 1 1 (cr=248 pr=0 pw=0 time=10068 us cost=5 size=18020 card=530)

I will not go into detail there. This delete may be optimized (120,000 audit trail records were actually deleted here behind those 6,000 rows). This table is partitioned, and we could expect old partitions to be truncated, but there are many bugs with that: on a lot of environments we see all rows in HIGH_PART.
This is improved in 12cR2 and will be the subject of a future post. If you have a huge audit trail to purge, then a conventional delete is not optimal.

Purge all

I still have a lot of rows remaining.

SQL> select unified_audit_policies,action_name,count(*) from unified_audit_trail group by unified_audit_policies,action_name;
 
UNIFIED_AUDIT_POLICIES ACTION_NAME COUNT(*)
---------------------------------------- -------------------- ----------
EXECUTE 4
ORA_LOGON_FAILURES LOGON 136149

When purging all without setting a timestamp, I expect a truncate, which is faster than deletes. Let’s try it and trace it.

SQL> exec dbms_audit_mgmt.clean_audit_trail(audit_trail_type=>dbms_audit_mgmt.audit_trail_unified
,use_last_arch_timestamp=>FALSE);
PL/SQL procedure successfully completed.

First, there seems to be an internal lock acquired:
SELECT LOCKID FROM DBMS_LOCK_ALLOCATED WHERE NAME = :B1 FOR UPDATE
UPDATE DBMS_LOCK_ALLOCATED SET EXPIRATION = SYSDATE + (:B1 /86400) WHERE ROWID = :B2

Then a partition split:
alter table "CLI_SWP$2f516430$1$1" split partition high_part at (3013381) into (partition "PART_6", partition high_part lob(log_piece) store as securefile (cache logging tablespace SYSAUX) tablespace "SYSAUX")

The split point is the current timestamp SCN:

SQL> select scn_to_timestamp(3013381) from dual;
 
SCN_TO_TIMESTAMP(3013381)
---------------------------------------------------------------------------
02-MAR-17 05.59.06.000000000 PM

This is the time when I ran the purge, and it is probably used to ‘truncate’ all previous partitions but keep the ongoing one.

Then, there is no TRUNCATE in the trace, but something similar: some segments are dropped:

delete from seg$
where
ts#=:1 and file#=:2 and block#=:3
 
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6 0.00 0.00 0 18 12 6
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7 0.00 0.00 0 18 12 6

There is finally a delete, but with no rows to delete as the rows were in the dropped segments:

delete from "CLI_SWP$2f516430$1$1" partition("HIGH_PART")
where
max_time < :1
 
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 3 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 3 0 0
 
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 7 (recursive depth: 1)
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 DELETE CLI_SWP$2f516430$1$1 (cr=3 pr=0 pw=0 time=61 us)
0 0 0 PARTITION RANGE SINGLE PARTITION: 1 1 (cr=3 pr=0 pw=0 time=57 us cost=5 size=2310 card=33)
0 0 0 TABLE ACCESS FULL CLI_SWP$2f516430$1$1 PARTITION: 1 1 (cr=3 pr=0 pw=0 time=48 us cost=5 size=2310 card=33)

So what?

Cleaning the Unified Audit Trail is done with internal statements but looks like a delete when use_last_arch_timestamp=>TRUE and a truncate when use_last_arch_timestamp=>FALSE. This means that we can use this procedure when AUDSYS has grown too much. However, there are a few bugs with this internal table, which is partitioned even when partitioning is not allowed. The implementation has changed in 12.2, so the next blog post will show the same test on 12cR2.

 

The article Purging Unified Audit Trail in 12cR1 appeared first on the dbi services blog.

12cR2 new features for Developers and DBAs - Here is my pick (Part 2)

Syed Jaffar - Sat, 2017-03-04 10:29
In Part 1, I outlined a few (my pick of) 12cR2 new features useful for developers and DBAs. In Part 2, I am going to discuss a few more new features.
Read/Write and Read-Only Instances
Read-write and read-only database instances of the same primary database can coexist in an Oracle Flex Cluster.
Advanced Index Compression
Prior to this release, the only form of advanced index compression was low compression. Now you can also specify high compression. High compression provides even more space savings than low compression.
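A minimal sketch of the new syntax (the index and table names are hypothetical):

-- Hypothetical 12.2 example: the new HIGH level of advanced index compression
CREATE INDEX sales_cust_ix ON sales (cust_id, time_id) COMPRESS ADVANCED HIGH;

-- An existing index can be moved to high compression with a rebuild
ALTER INDEX sales_cust_ix REBUILD COMPRESS ADVANCED HIGH;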
PDBs Enhancements
  • I/O Rate Limits for PDBs
  • Different character sets of PDBs in a CDB
  • PDB refresh to periodically propagate changes from a source PDB to its cloned copy
  • CONTAINERS hint: When a CONTAINERS() query is submitted, recursive SQL statements are generated and executed in each PDB. Hints can be passed to these recursive SQL statements by using the CONTAINERS statement-level hint (see the short sketch after this list).
  • Cloning a PDB no longer requires read-only mode: hot cloning of a pluggable database (PDB) resolves the issue of setting the source system to read-only mode before creating a full or snapshot clone of a PDB.
  • Near Zero Downtime PDB Relocation: This new feature significantly reduces downtime by leveraging the clone functionality to relocate a pluggable database (PDB) from one multitenant container database (CDB) to another CDB. The source PDB is still open and fully functional while the actual cloning operation is taking place.
  • Proxy PDB: A proxy pluggable database (PDB) provides fully functional access to another PDB in a remote multitenant container database (CDB). This feature enables you to build location-transparent applications that can aggregate data from multiple sources that are in the same data center or distributed across data centers.
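As a small, hedged sketch of the CONTAINERS clause and the near zero downtime relocation mentioned above (the table name, PDB name and database link are assumptions; the query is run from the CDB root):

-- Cross-container aggregation: CONTAINERS() generates recursive SQL in each open PDB
SELECT con_id, COUNT(*)
FROM   CONTAINERS(sales.orders)
GROUP  BY con_id;

-- Relocate a PDB from a remote CDB while the source stays open, then open the copy
CREATE PLUGGABLE DATABASE sales_pdb FROM sales_pdb@cdb1_link RELOCATE;
ALTER PLUGGABLE DATABASE sales_pdb OPEN;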
Oracle Data Pump Parallel Export of Metadata: The PARALLEL parameter for Oracle Data Pump, which previously applied only to data, is extended to include metadata export operations. The performance of Oracle Data Pump export jobs is improved by enabling the use of multiple processes working in parallel to export metadata.
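A one-line sketch of an export benefiting from this (the directory object and file names are assumptions):

# In 12.2 the PARALLEL setting now also applies to the metadata phase of the export
expdp system DIRECTORY=dp_dir DUMPFILE=full_%U.dmp LOGFILE=full_exp.log FULL=Y PARALLEL=4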
Renaming Data Files During Import
Oracle RAC :
  • Server Weight-Based Node Eviction: Server weight-based node eviction acts as a tie-breaker mechanism in situations where Oracle Clusterware needs to evict a particular node or a group of nodes from a cluster, in which all nodes represent an equal choice for eviction. In such cases, the server weight-based node eviction mechanism helps to identify the node or the group of nodes to be evicted based on additional information about the load on those servers. Two principal mechanisms, a system-inherent automatic mechanism and a user-input-based mechanism, exist to provide respective guidance.
  • Load-Aware Resource Placement : Load-aware resource placement prevents overloading a server with more applications than the server is capable of running. The metrics used to determine whether an application can be started on a given server, either as part of the startup or as a result of a failover, are based on the anticipated resource consumption of the application as well as the capacity of the server in terms of CPU and memory.
Enhanced Rapid Home Provisioning and Patch Management
TDE Tablespace Live Conversion: You can now encrypt, decrypt, and rekey existing tablespaces with Transparent Data Encryption (TDE) tablespace live conversion. A TDE tablespace can be easily deployed, performing the initial encryption that migrates to an encrypted tablespace with zero downtime. This feature also enables automated deep rotation of data encryption keys used by TDE tablespace encryption in the background with zero downtime.
Fully Encrypted Database: Transparent Data Encryption (TDE) tablespace encryption is applied to database internals including SYSTEM, SYSAUX, and UNDO.
TDE Tablespace Offline Conversion: This release introduces new SQL commands to encrypt tablespace files in place with no storage overhead. You can do this on multiple instances across multiple cores. Using this feature requires downtime, because you must take the tablespace temporarily offline. With Data Guard configurations, you can either encrypt the physical standby first and switchover, or encrypt the primary database, one tablespace at a time.
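A hedged sketch of both conversions (the tablespace name and algorithm are assumptions; the online form may also need FILE_NAME_CONVERT unless OMF is in use):

-- Online conversion: encrypt an existing tablespace while it remains available
ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES256' ENCRYPT;

-- Offline conversion: in place, no storage overhead, but the tablespace must be offline
ALTER TABLESPACE users OFFLINE NORMAL;
ALTER TABLESPACE users ENCRYPTION OFFLINE ENCRYPT;
ALTER TABLESPACE users ONLINE;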

Is your DBA_FEATURE_USAGE_STATISTICS up-to-date?

Yann Neuhaus - Sat, 2017-03-04 05:35

The other day we were doing a licensing review for a client. As many DBAs may already know, this requires executing some Oracle scripts at OS level and database level.
Among these scripts we have options_packs_usage_statistics.sql (Doc ID 1317265.1), an official Oracle script to check the usage of separately licensed Oracle Database Options/Management Packs.
This script uses the DBA_FEATURE_USAGE_STATISTICS view to retrieve its information, and sometimes the data in this view are not recent.
One important thing is that the DBA_FEATURE_USAGE_STATISTICS contents are based on the most recent sample, recorded in the column LAST_SAMPLE_DATE. In our case we got the following results (outputs are truncated).

SYSDATE |
-------------------|
2017.02.17_13.36.44|


PRODUCT |LAST_SAMPLE_DATE |
-------------------------------|-------------------|
Active Data Guard |2014.01.02_13.37.53|
Advanced Analytics |2014.01.02_13.37.53|
Advanced Compression |2014.01.02_13.37.53|
Advanced Security |2014.01.02_13.37.53|
Database Vault |2014.01.02_13.37.53|
Diagnostics Pack |2014.01.02_13.37.53|
Label Security |2014.01.02_13.37.53|
OLAP |2014.01.02_13.37.53|
Partitioning |2014.01.02_13.37.53|
Real Application Clusters |2014.01.02_13.37.53|
Real Application Testing |2014.01.02_13.37.53|
Tuning Pack |2014.01.02_13.37.53|
.Exadata |2014.01.02_13.37.53|

If we compare sysdate with the last_sample_date, we can see that we have to manually refresh our DBA_FEATURE_USAGE_STATISTICS data.
One way to do this is to run the procedure:

exec dbms_feature_usage_internal.exec_db_usage_sampling(SYSDATE);

In our case the procedure did not refresh our data, even though there was no error and we received a message that the procedure completed successfully.

SQL> exec dbms_feature_usage_internal.exec_db_usage_sampling(SYSDATE);
PL/SQL procedure successfully completed.


SQL> select max(last_sample_date) from dba_feature_usage_statistics order by 1;
MAX(LAST_
---------
02-JAN-14

Following Oracle document 1629485.1, we were able to refresh the last_sample_date using the following ALTER SESSION:

SQL> alter session set "_SWRF_TEST_ACTION"=53;
Session altered.


SQL> alter session set NLS_DATE_FORMAT='DD/MM/YYYY HH24:MI:SS';
Session altered.


SQL> select MAX(LAST_SAMPLE_DATE) from dba_feature_usage_statistics;
MAX(LAST_SAMPLE_DAT
-------------------
16/02/2017 13:44:46

Hope this article helps.

 

The article Is your DBA_FEATURE_USAGE_STATISTICS up-to-date? appeared first on the dbi services blog.

Sharding with Oracle 12c R2 Part I

Yann Neuhaus - Sat, 2017-03-04 05:33

Oracle 12.2 comes with many new features. In this article we are going to talk about sharding. It is a database scaling technique based on horizontal partitioning of data across multiple Oracle databases that together form a sharded database (SDB). Each shard contains the same tables with the same columns but a different subset of rows.
For the DBA, an SDB is in fact multiple databases that can be managed collectively or individually.
There are 3 methods of sharding: system-managed sharding, user-defined sharding and composite sharding.
In this article we are using system-managed sharding, where data are automatically distributed across shards using partitioning by consistent hash. It is the most widely used method. We will just demonstrate how it is possible to create shards using Oracle. In the next articles we will show how we can connect to these shards and how it is possible to add new shards.
What do we need?
Oracle Database 12c Release 2 : linuxx64_12201_database.zip
Oracle Database 12c Release 2 Global Service Manager : linuxx64_12201_gsm.zip

In this demo we use the following configuration to create sharded databases on sharddemo2 and sharddemo3.
VM sharddemo1:  catalog
VM sharddemo2: shard
VM sharddemo3: shard

Oracle 12.2 should be installed on all servers. We will not show the Oracle software installation.
The GSM software should be installed on the catalog server, sharddemo1.

After unzipping the file, just launch the runInstaller:
[oracle@sharddemo1 gsm122]$ ./runInstaller



[root@sharddemo1 oracle122]# /u01/app/oracle/product/12.2.0.1/gsmhome_1/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.2.0.1/gsmhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@sharddemo1 oracle122]#


The second step is to create the catalog database on sharddemo1. We will name it ORCLCAT (NON-CDB). Some database parameters need to be configured for ORCLCAT. Database creation is not shown here.

SQL> alter system set db_create_file_dest='/u01/app/oracle/oradata/';
System altered.
SQL> alter system set open_links=16 scope=spfile;
System altered.
SQL> alter system set open_links_per_instance=16 scope=spfile;
System altered.

The Oracle 12.2 database comes with a gsmcatuser schema. This schema is used by the shard director when connecting to the shard catalog database. It is locked by default, so we have to unlock it.

SQL> alter user gsmcatuser account unlock;
User altered.
SQL> alter user gsmcatuser identified by root;
User altered.

We also have to create the GSM administrator schema (mygdsadmin in our case) and grant it the required privileges:

SQL> create user mygdsadmin identified by root;
User created.
SQL> grant connect, create session, gsmadmin_role to mygdsadmin;
Grant succeeded.
SQL> grant inherit privileges on user SYS to GSMADMIN_INTERNAL;
Grant succeeded.
The next step is to configure the scheduler on the shard catalog by setting the remote scheduler's HTTP port and the agent registration password on the shard catalog database ORCLCAT:

SQL> execute dbms_xdb.sethttpport(8080);
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> @?/rdbms/admin/prvtrsch.plb


SQL> exec DBMS_SCHEDULER.SET_AGENT_REGISTRATION_PASS('welcome');
PL/SQL procedure successfully completed.
SQL>

We now have to register the sharddemo2 and sharddemo3 agents in the scheduler. The schagent executable in $ORACLE_HOME/bin is used. After registration, the agents should be started.
Below is the registration for sharddemo2:

[oracle@sharddemo2 ~]$ schagent -start
Scheduler agent started using port 21440
[oracle@sharddemo2 ~]$ echo welcome | schagent -registerdatabase sharddemo1 8080
Agent Registration Password ?
Oracle Scheduler Agent Registration for 12.2.0.1.2 Agent
Agent Registration Successful!
[oracle@sharddemo2 ~]$

After agent registration, the corresponding database directories must be created on sharddemo2 and sharddemo3:

[oracle@sharddemo2 ~]$ mkdir /u01/app/oracle/oradata
[oracle@sharddemo2 ~]$ mkdir /u01/app/oracle/fast_recovery_area

Now it’s time to launch the Global Data Services Control Utility (GDSCTL) on sharddemo1. GDSCTL is in $GSM_HOME/bin, in our case /u01/app/oracle/product/12.2.0.1/gsmhome_1/bin:

[oracle@sharddemo1 ~]$ gdsctl
GDSCTL: Version 12.2.0.1.0 - Production on Thu Mar 02 13:53:50 CET 2017
Copyright (c) 2011, 2016, Oracle. All rights reserved.
Welcome to GDSCTL, type "help" for information.
Warning: current GSM name is not set automatically because gsm.ora contains zero or several GSM entries. Use "set gsm" command to set GSM for the session.
Current GSM is set to GSMORA
GDSCTL>

And to create the shardcatalog

GDSCTL>create shardcatalog -database sharddemo1:1521:ORCLCAT -chunks 12 -user mygdsadmin/root -sdb cust_sdb -region region1
Catalog is created
GDSCTL>

Now let’s create and start the shard director. The listener of the gsm should use a free port. In our case the port is 1571

GDSCTL>add gsm -gsm region1_director -listener 1571 -pwd root -catalog sharddemo1:1521:ORCLCAT -region region1
GSM successfully added


GDSCTL>start gsm -gsm region1_director
GSM is started successfully
GDSCTL>


GDSCTL>status gsm
Alias REGION1_DIRECTOR
Version 12.2.0.1.0
Start Date 02-MAR-2017 14:03:36
Trace Level off
Listener Log File /u01/app/oracle/diag/gsm/sharddemo1/region1_director/alert/log.xml
Listener Trace File /u01/app/oracle/diag/gsm/sharddemo1/region1_director/trace/ora_21814_140615026692480.trc
Endpoint summary (ADDRESS=(HOST=sharddemo1.localdomain)(PORT=1571)(PROTOCOL=tcp))
GSMOCI Version 2.2.1
Mastership Y
Connected to GDS catalog Y
Process Id 21818
Number of reconnections 0
Pending tasks. Total 0
Tasks in process. Total 0
Regional Mastership TRUE
Total messages published 0
Time Zone +01:00
Orphaned Buddy Regions:
None
GDS region region1
GDSCTL>

We also have to set the scheduler agent password to “welcome” in gdsctl

GDSCTL>modify catalog -agent_password welcome
The operation completed successfully
GDSCTL>

The OS credential for the user “oracle” must be defined. We are using the same OS credential for all the shards

GDSCTL>add credential -credential oracle_cred -osaccount oracle -ospassword root
The operation completed successfully
GDSCTL>

Before deploying the shards we have to define metadata for them.

GDSCTL>set gsm -gsm region1_director
GDSCTL>connect mygdsadmin/root
Catalog connection is established
GDSCTL>


GDSCTL>add shardgroup -shardgroup shgrp1 -deploy_as primary -region region1
The operation completed successfully
GDSCTL>


GDSCTL>add invitednode sharddemo2


GDSCTL>create shard -shardgroup shgrp1 -destination sharddemo2 -credential oracle_cred
The operation completed successfully
DB Unique Name: sh1
GDSCTL>


GDSCTL>add invitednode sharddemo3


GDSCTL>create shard -shardgroup shgrp1 -destination sharddemo3 -credential oracle_cred
The operation completed successfully
DB Unique Name: sh21
GDSCTL>

We can then verify the status of our configuration

GDSCTL>config shard
Name Shard Group Status State Region Availability
---- ----------- ------ ----- ------ ------------
sh1 shgrp1 U none region1 -
sh21 shgrp1 U none region1 -

If there is no error, it’s time to deploy our shards. Deployment is the last step before creating the schema we will use for system-managed sharding.

GDSCTL>deploy
deploy: examining configuration...
deploy: deploying primary shard 'sh1' ...
deploy: network listener configuration successful at destination 'sharddemo2'
deploy: starting DBCA at destination 'sharddemo2' to create primary shard 'sh1' ...
deploy: deploying primary shard 'sh21' ...
deploy: network listener configuration successful at destination 'sharddemo3'
deploy: starting DBCA at destination 'sharddemo3' to create primary shard 'sh21' ...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: DBCA primary creation job succeeded at destination 'sharddemo3' for shard 'sh21'
deploy: waiting for 1 DBCA primary creation job(s) to complete...
deploy: DBCA primary creation job succeeded at destination 'sharddemo2' for shard 'sh1'
deploy: requesting Data Guard configuration on shards via GSM
deploy: shards configured successfully
The operation completed successfully
GDSCTL>

The command may take some time. Once it is done, running the config command again should return:

GDSCTL>config shard
Name Shard Group Status State Region Availability
---- ----------- ------ ----- ------ ------------
sh1 shgrp1 Ok Deployed region1 ONLINE
sh21 shgrp1 Ok Deployed region1 ONLINE

We should have two instances running on sharddemo2 and sharddemo3: sh1 and sh21.
Now that the shards are deployed, let’s create in the shard catalog database ORCLCAT the schema we will use for sharding. Here the user is called user_shard.

SQL> show parameter instance_name
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
instance_name string ORCLCAT
SQL>


SQL>alter session enable shard ddl;
SQL>create user user_shard identified by user_shard;
SQL>grant connect, resource, alter session to user_shard;
SQL>grant execute on dbms_crypto to user_shard;
SQL>grant create table, create procedure, create tablespace, create materialized view to user_shard;
SQL>grant unlimited tablespace to user_shard;
SQL>grant select_catalog_role to user_shard;
SQL>grant all privileges to user_shard;
SQL>grant gsmadmin_role to user_shard;
SQL>grant dba to user_shard;

In a sharding environment, we have two types of tables:
Sharded tables: data are distributed across the different shards
Duplicated tables: data are duplicated in the different shards
Let’s create tablespaces for each type of table.

SQL> CREATE TABLESPACE SET TAB_PRIMA_SET using template (datafile size 100m autoextend on next 10M maxsize unlimited extent management local segment space management auto );
Tablespace created.


SQL> CREATE TABLESPACE TAB_PRODUCT datafile size 100m autoextend on next 10M maxsize unlimited extent management local uniform size 1m;
Tablespace created

Now, under the user_shard schema, let’s create sharded and duplicated tables in ORCLCAT.

CREATE SHARDED TABLE Customers
(
CustId VARCHAR2(60) NOT NULL,
FirstName VARCHAR2(60),
LastName VARCHAR2(60),
Class VARCHAR2(10),
Geo VARCHAR2(8),
CustProfile VARCHAR2(4000),
CONSTRAINT pk_customers PRIMARY KEY (CustId),
CONSTRAINT json_customers CHECK (CustProfile IS JSON)
) TABLESPACE SET TAB_PRIMA_SET PARTITION BY CONSISTENT HASH (CustId) PARTITIONS AUTO;
Table created.


CREATE SHARDED TABLE Orders
(
OrderId INTEGER NOT NULL,
CustId VARCHAR2(60) NOT NULL,
OrderDate TIMESTAMP NOT NULL,
SumTotal NUMBER(19,4),
Status CHAR(4),
constraint pk_orders primary key (CustId, OrderId),
constraint fk_orders_parent foreign key (CustId)
references Customers on delete cascade
) partition by reference (fk_orders_parent);


CREATE DUPLICATED TABLE Products
(
ProductId INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
Name VARCHAR2(128),
DescrUri VARCHAR2(128),
LastPrice NUMBER(19,4)
) TABLESPACE TAB_PRODUCT;
Table created.

Some checks can be done on the instances (ORCLCAT, sh1, sh21) to verify, for example, that the tablespaces, sharded tables and duplicated tables are created.

SQL> select name from v$database;
NAME
---------
ORCLCAT

SQL> select TABLESPACE_NAME, BYTES/1024/1024 MB from sys.dba_data_files order by tablespace_name;
TABLESPACE_NAME MB
------------------------------ ----------
SYSAUX 660
SYSTEM 890
TAB_PRIMA_SET 100
TAB_PRODUCT 100
UNDOTBS1 110
USERS 5

on sh1 and sh21

select TABLESPACE_NAME, BYTES/1024/1024 MB from sys.dba_data_files order by tablespace_name;
C001TAB_PRIMA_SET 100
C002TAB_PRIMA_SET 100
C003TAB_PRIMA_SET 100
C004TAB_PRIMA_SET 100
C005TAB_PRIMA_SET 100
C006TAB_PRIMA_SET 100
SYSAUX 660
SYSTEM 890
SYS_SHARD_TS 100
TAB_PRIMA_SET 100
TAB_PRODUCT 100
UNDOTBS1 115
USERS 5

On sharddemo2 (sh1), for example, verify that the chunks and chunk tablespaces are created:

SQL> select table_name, partition_name, tablespace_name from dba_tab_partitions where tablespace_name like 'C%TAB_PRIMA_SET' order by tablespace_name;
CUSTOMERS CUSTOMERS_P1 C001TAB_PRIMA_SET
ORDERS CUSTOMERS_P1 C001TAB_PRIMA_SET
CUSTOMERS CUSTOMERS_P2 C002TAB_PRIMA_SET
ORDERS CUSTOMERS_P2 C002TAB_PRIMA_SET
CUSTOMERS CUSTOMERS_P3 C003TAB_PRIMA_SET
ORDERS CUSTOMERS_P3 C003TAB_PRIMA_SET
CUSTOMERS CUSTOMERS_P4 C004TAB_PRIMA_SET
ORDERS CUSTOMERS_P4 C004TAB_PRIMA_SET
CUSTOMERS CUSTOMERS_P5 C005TAB_PRIMA_SET
ORDERS CUSTOMERS_P5 C005TAB_PRIMA_SET
CUSTOMERS CUSTOMERS_P6 C006TAB_PRIMA_SET
ORDERS CUSTOMERS_P6 C006TAB_PRIMA_SET

On ORCLCAT

SQL> select table_name from user_tables;
TABLE_NAME
---------------------------------------------------
MLOG$_PRODUCTS
PRODUCTS
CUSTOMERS
ORDERS
RUPD$_PRODUCTS

On sh1

SQL> select table_name from user_tables;
TABLE_NAME
------------------------------------------------------------
PRODUCTS
CUSTOMERS
ORDERS

on sh21

SQL> select table_name from user_tables;
TABLE_NAME
--------------------------------------------------------------
PRODUCTS
CUSTOMERS
ORDERS

Using gdsctl on sharddemo1, we can also see the DDL that was executed by using the show ddl command on the GSM interface:

GDSCTL>show ddl
id DDL Text Failed shards
-- -------- -------------
6 grant select_catalog_role to user_shard;
7 grant all privileges to user_shard;
8 grant gsmadmin_role to user_shard;
9 grant dba to user_shard;
10 CREATE TABLESPACE SET TAB_PRIMA_SET u...
11 CREATE TABLESPACE TAB_PRODUCT datafil...
12 CREATE SHARDED TABLE Customers ( Cust...
13 CREATE SHARDED TABLE Orders ( OrderId...
14 create database link "PRODUCTSDBLINK@...
15 CREATE MATERIALIZED VIEW "PRODUCTS" ...

And now our sharding should work. After inserting some data, we can see that for duplicated tables the whole data set is replicated into the different shards.
On ORCLCAT, for example, the number of rows in the products table is 9:

SQL> select count(*) from products;
COUNT(*)
----------
9

On sh1, as products is a duplicated table, the number of rows is also 9:

SQL> select count(*) from products;
COUNT(*)
----------
9

Same for the products table on sh21:

SQL> select count(*) from products;
COUNT(*)
----------
9

For sharded tables, we can see that rows are distributed

On ORCLCAT

SQL> select count(*) from customers;
COUNT(*)
----------
14

On sh1

SQL> select count(*) from customers;
COUNT(*)
----------
6

On sh21, the number of rows in customers is 8:

SQL> select count(*) from customers;
COUNT(*)
----------
8

Conclusion
In this first part we talked about sharding configuration. We have seen how, using Oracle Global Data Services, we can create shards. In a second part we will see how to connect to the shards and how scalability is possible in a sharded environment.

 

The article Sharding with Oracle 12c R2 Part I appeared first on the dbi services blog.

How to embed HTTP content inside a HTTPS webpage / Mixed content problems

Dietrich Schroff - Sat, 2017-03-04 03:27
If you are running a webpage and decide to move to SSL protection, you can encounter the following problem: inside your webpage you are using tags like "iframe", "script" or "link" pointing to HTTP servers. This is considered mixed active content (Mozilla):

Mixed active content is content that has access to all or parts of the Document Object Model of the HTTPS page. This type of mixed content can alter the behavior of the HTTPS page and potentially steal sensitive data from the user. Hence, in addition to the risks described for mixed display content above, mixed active content is vulnerable to a few other attack vectors.
And this will not work...

The best solution is: change all links from HTTP to HTTPS and you are done.
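As a rough sketch (paths and file locations are assumptions), you can hunt down and rewrite such references directly on the web server:

# Find pages that still embed active content over plain HTTP
grep -rln 'src="http://' /var/www/html

# Rewrite those references to HTTPS in place (keeping .bak copies of the originals)
sed -i.bak 's|src="http://|src="https://|g' /var/www/html/*.html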

But there are still websites which offer their content over HTTP only. If you really trust them, you can do the following:
Wrap the link inside an HTTPS proxy, like https://ssl-proxy.my-addr.org/myaddrproxy.php/http/yourlink

Of course this is not an excellent solution, but a workaround which allows you to protect your website, and if you separate this solution from the pages which deal with sensitive content, you should be fine...

DBMS_COMPARISON: ora-23626: 'schema.indexname' not eligible index error

Tom Kyte - Fri, 2017-03-03 23:46
Hi Tom, I have a very large table with over 850 million rows of data. We are using CDC to extract the data from the source system to a target for publication and etl to a datawarehouse and ODS. I have a requirement to run periodic checks to ensu...
Categories: DBA Blogs
