Feed aggregator

TensorFlow - Getting Started with Docker Container and Jupyter Notebook

Andrejus Baranovski - Thu, 2017-11-23 10:38
I'm studying Machine Learning and would like to share some introductory experience working with TensorFlow. To get started with TensorFlow you need to install it; the easiest way (at least for me) was to run TensorFlow in Docker. Read the installation instructions - Installing TensorFlow.

Once the TensorFlow Docker image is installed, I suggest creating the container in detached mode (--detach=true) and publishing the port for the Jupyter UI. Make sure to give the Docker container a meaningful name:

docker run --detach=true --name RedSamuraiTensorFlowUI -it -p 8888:8888 gcr.io/tensorflow/tensorflow

Make sure to start and stop the container using the Docker start/stop commands; don't create and run a new container each time (otherwise you will lose your work, since a new container is created on every run):

docker start RedSamuraiTensorFlowUI (docker stop RedSamuraiTensorFlowUI)

Once the container is running in detached mode, you can access its logs by executing the docker logs command and specifying the container name:

docker logs -f RedSamuraiTensorFlowUI

At this point you should see output in the Docker container log; copy the Jupyter UI URL with the token and paste it into the browser (for example: http://localhost:8888/?token=d0f617a4c719c40ea39a3732447d67fd40ff2028bb335823):


This will give you access to the Jupyter UI. It is possible to run TensorFlow Python scripts directly from the command line in the Docker environment, but it is more convenient to do the same through the UI:


The UI gives the option to create a new Terminal session:


The Terminal allows you to run Python code from the command line:


Instead of using the command line, it is more convenient to create a new notebook:


The notebook environment allows you to type in Python code and execute math calculations. In the example below I multiply two arrays element-wise (1x5, 2x6, 3x7, 4x8) in Python code through the TensorFlow library. The result is printed through the TensorFlow session object right below, and a prompt for the next command is displayed - very convenient:
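The notebook cell boils down to a few lines; here is a minimal sketch, assuming the TensorFlow 1.x API that ships in the 2017 gcr.io/tensorflow/tensorflow image (variable names are just illustrative):

import tensorflow as tf

# Two constant 1-D tensors; multiplying them element-wise
# yields [1*5, 2*6, 3*7, 4*8] = [5, 12, 21, 32]
a = tf.constant([1.0, 2.0, 3.0, 4.0])
b = tf.constant([5.0, 6.0, 7.0, 8.0])
product = tf.multiply(a, b)

# In TensorFlow 1.x the graph is evaluated through a session object
with tf.Session() as sess:
    print(sess.run(product))   # prints [ 5. 12. 21. 32.]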


The Jupyter UI allows you to track running notebooks and terminals:


Whatever action you take in the Jupyter UI can be tracked through the log printed in the Docker container log. The Jupyter UI is a client-side JS application:


To double-check the Docker configuration: I have the TensorFlow Docker image:


And the Docker container, which can be started/stopped by name (see the commands listed above), without running a new Docker container on every restart:

[K21Academy Weekly Newsletter] 171123 Happy Thanksgiving & Happy Holidays if you are in USA or Canada

Online Apps DBA - Thu, 2017-11-23 08:04

[K21Academy Weekly Newsletter] 171123 Happy Thanksgiving & Happy Holidays if you are in the USA or Canada. Thanks for registering for my Weekly Newsletter, where you get the latest Updates, Tips & How-Tos related to Oracle. This week, you will find: 1. Oracle EBS (R12) on Cloud for Beginners: 15 Must-Know Things 2. Oracle Fusion […]

The post [K21Academy Weekly Newsletter] 171123 Happy Thanksgiving & Happy Holidays if you are in USA or Canada appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

DOAG 2017

Yann Neuhaus - Thu, 2017-11-23 08:00

As a consultant at dbi services, I have spent most of the last 2 years on consolidation projects based on the Oracle Engineered Systems. So the talks about the new generation of the Oracle Database Appliance X7-2 were of particular interest to me.

From my point of view, Oracle has taken the right step and reduced the variety of ODA systems that still existed in the X6-2 generation (Small/Medium/Large and HA).

Going forward, the ODA X7-2 will be offered in only 3 models (S, M and HA). It is fair to say that the smaller systems have been upgraded in terms of performance, and the HA is finally available again in a configuration that allows consolidation of larger database and application systems:

  • the ODA X7-2 S as the entry-level system with one 10-core CPU, up to 384 GB RAM and 12.8 TB of NVMe storage
  • the ODA X7-2 M now corresponds more to the X6 L systems, with 2×18 cores, up to 768 GB RAM and up to 51.2 TB of NVMe storage
  • the ODA X7-2 HA is of course the flagship of the ODA class: 2 servers with 2×18 cores each, up to 768 GB RAM per server and various storage expansions up to 150 TB give you the old X5 feeling (or perhaps even that of working with an Exadata)

For me, the most interesting news is not so much in the hardware, but rather in the possible deployments of the systems:

  • all systems support SE/SE1/SE2/EE 11.2.0.4, 12.1.0.2 and 12.2.0.1
  • all systems support a virtualized setup, the small systems with KVM and the X7-2 HA with KVM and OVM; hard partitioning with KVM is not yet possible, but it is planned
  • on the X7-2 HA you can choose between High Performance (SSD) and High Capacity (HDD) storage expansions; even mixed configurations are possible with some restrictions

Savings were made on the network interfaces, however: there are now only 2 interfaces instead of the previous 4 on the X5-2 (besides the private interfaces for the interconnect). It is possible to configure an additional interface after the deployment, but only with 1 GbE. It is planned that VLAN configurations on the public interface will also be possible in a bare metal setup in the future; still, Oracle could have provided two additional interfaces (especially on the HA), since slots are available.

The performance figures of the NVMe and SSD storage are very interesting: up to 100,000 IOPS are possible, and on the HA I see the SAS bus rather than the SSDs as the limiting factor. What is really nice is that the storage for the redo logs has been extended to 4x800 GB SSD; in the earlier systems you always had to be a bit frugal here…

All in all, I am looking forward to working with the X7-2, because Oracle delivers a good piece of hardware here that also stays within a reasonable price range.


The post DOAG 2017 appeared first on Blog dbi services.

Why SUM(USER_BYTES) in DBA_DATA_FILES is much larger than SUM(BYTES) in dba_free_space ?

Tom Kyte - Thu, 2017-11-23 07:06
Hello, teams :-) Why is SUM(USER_BYTES) in DBA_DATA_FILES much larger than SUM(BYTES) in dba_free_space? Here is an example that I ran on Oracle 11.2.0.4.0. <code> SYS@orcl28> select round(sum(user_bytes)/(1024*1024*1024),2) fro...
Categories: DBA Blogs

Sending HTML using UTL_SMTP

Tom Kyte - Thu, 2017-11-23 07:06
Hi Tom I hope I'm phrasing this correctly... I'd like to be able to send an HTML formatted email from the database using the UTL_SMTP package. I don't see any way of setting the MIME type. Is this beyond the scope of UTL_SMTP? thanks in ...
Categories: DBA Blogs

CRS-2674: Start of dbfs_mount failed

Michael Dinh - Wed, 2017-11-22 19:04

$ crsctl start resource dbfs_mount
CRS-2672: Attempting to start 'dbfs_mount' on 'node2'
CRS-2672: Attempting to start 'dbfs_mount' on 'node1'
CRS-2674: Start of 'dbfs_mount' on 'node1' failed
CRS-2679: Attempting to clean 'dbfs_mount' on 'node1'
CRS-2674: Start of 'dbfs_mount' on 'node2' failed
CRS-2679: Attempting to clean 'dbfs_mount' on 'node2'
CRS-2681: Clean of 'dbfs_mount' on 'node1' succeeded
CRS-2681: Clean of 'dbfs_mount' on 'node2' succeeded
CRS-4000: Command Start failed, or completed with errors.

Check to make sure the DBFS_USER password is not expired.
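One quick way to verify this is to look at the account status in DBA_USERS; a minimal sketch using the cx_Oracle driver, assuming the DBFS repository owner is literally named DBFS_USER and with an illustrative connection string:

import cx_Oracle

# Connect as a privileged user that can read DBA_USERS
# (connection string is illustrative -- adjust for your environment)
conn = cx_Oracle.connect("system/manager@node1/orcl")

cur = conn.cursor()
cur.execute("""
    select username, account_status, expiry_date
      from dba_users
     where username = :u
""", u="DBFS_USER")

for username, status, expiry in cur:
    # Anything other than OPEN (e.g. EXPIRED, LOCKED) will break dbfs_mount
    print(username, status, expiry)

cur.close()
conn.close()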


Conditional index

Tom Kyte - Wed, 2017-11-22 12:46
Tom, Thanks for taking my question. I am trying to conditionally index rows in a table. In SQL Server 2008 there is a feature called filtered indexes that allows you to create an index with a where clause. So I have a table abc: <code>create...
Categories: DBA Blogs

Transfer data from one db to another db over db link using trigger

Tom Kyte - Wed, 2017-11-22 12:46
Hi, I am working on a project in which data marts are involved. We are creating triggers to transfer data from OLTP DB to data mart (Online extraction). Following is the code of a trigger for a table involving clob column. I have seen different solut...
Categories: DBA Blogs

Create index CONCURRENTLY in PostgreSQL

Yann Neuhaus - Wed, 2017-11-22 12:10

In PostgreSQL, when you create an index on a table, sessions that want to write to the table must, by default, wait until the index build has completed. There is a way around that, though, and in this post we'll look at how you can avoid it.

As usual we’ll start with a little table:

postgres=# \! cat a.sql
drop table if exists t1;
create table t1 ( a int, b varchar(50));
insert into t1
select a.*, md5(a::varchar) from generate_series(1,5000000) a;
postgres=# \i a.sql
DROP TABLE
CREATE TABLE
INSERT 0 5000000

When you now create an index on that table and try to write to the table at the same time from a different session, that session will wait until the index is there (the screenshot shows the first session creating the index on the left and the second session doing the update on the right, waiting for the left one):

For production environments this is not something you want to happen, as it can block a lot of other sessions, especially when the table in question is heavily used. You can avoid that by using "create index concurrently".


Using that syntax, writes to the table from other sessions will succeed while the index is being built. But, as clearly stated in the documentation, the downside is that the table needs to be scanned twice, so more work needs to be done, which means more resource usage on your server. Other points need to be considered as well. When, for whatever reason, your index build fails (e.g. by canceling the create index statement):

postgres=# create index concurrently i1 on t1(a);
^CCancel request sent
ERROR:  canceling statement due to user request

… you might expect the index not to be there at all, but this is not the case. When you try to create the index again right after the canceled statement, you'll hit this:

postgres=# create index concurrently i1 on t1(a);
ERROR:  relation "i1" already exists

This does not happen when you do not create the index concurrently:

postgres=# create index i1 on t1(a);
^CCancel request sent
ERROR:  canceling statement due to user request
postgres=# create index i1 on t1(a);
CREATE INDEX
postgres=# 

The question is: why does this happen in the concurrent case but not in the "normal" case? The reason is simple: when you create an index the "normal" way, the whole build is done in one transaction. Because of this, the index does not exist when the transaction is aborted (the create index statement is canceled). When you build the index concurrently, there are multiple transactions involved: "In a concurrent index build, the index is actually entered into the system catalogs in one transaction, then two table scans occur in two more transactions". So in this case:

postgres=# create index concurrently i1 on t1(a);
ERROR:  relation "i1" already exists

… the index is already stored in the catalog:

postgres=# create index concurrently i1 on t1(a);
^CCancel request sent
ERROR:  canceling statement due to user request
postgres=# select relname,relkind,relfilenode from pg_class where relname = 'i1';
 relname | relkind | relfilenode 
---------+---------+-------------
 i1      | i       |       32926
(1 row)

If you don’t take care of that you will have invalid indexes in your database:

postgres=# \d t1
                        Table "public.t1"
 Column |         Type          | Collation | Nullable | Default 
--------+-----------------------+-----------+----------+---------
 a      | integer               |           |          | 
 b      | character varying(50) |           |          | 
Indexes:
    "i1" btree (a) INVALID

You might think that this does no harm, but then consider this case:

-- in session one build a unique index
postgres=# create unique index concurrently i1 on t1(a);
-- then in session two violate the uniqueness after some seconds
postgres=# update t1 set a = 5 where a = 4000000;
UPDATE 1
-- the create index statement will fail in the first session
postgres=# create unique index concurrently i1 on t1(a);
ERROR:  duplicate key value violates unique constraint "i1"
DETAIL:  Key (a)=(5) already exists.

This is even worse as the index now really consumes space on disk:

postgres=# select relpages from pg_class where relname = 'i1';
 relpages 
----------
    13713
(1 row)

The index is, of course, invalid and will not be used by the planner:

postgres=# \d t1
                        Table "public.t1"
 Column |         Type          | Collation | Nullable | Default 
--------+-----------------------+-----------+----------+---------
 a      | integer               |           |          | 
 b      | character varying(50) |           |          | 
Indexes:
    "i1" UNIQUE, btree (a) INVALID

postgres=# explain select * from t1 where a = 12345;
                              QUERY PLAN                              
----------------------------------------------------------------------
 Gather  (cost=1000.00..82251.41 rows=1 width=37)
   Workers Planned: 2
   ->  Parallel Seq Scan on t1  (cost=0.00..81251.31 rows=1 width=37)
         Filter: (a = 12345)
(4 rows)

But the index is still maintained:

postgres=# select relpages from pg_class where relname = 'i1';
 relpages 
----------
    13713
(1 row)
postgres=# insert into t1 select a.*, md5(a::varchar) from generate_series(5000001,6000000) a;
INSERT 0 1000000

postgres=# select relpages from pg_class where relname = 'i1';
 relpages 
----------
    16454
(1 row)

So now you have an index which cannot be used to speed up queries (which is bad), but which is still maintained when you write to the table (which is even worse, because you consume resources for nothing). The only way out of this is to drop and re-create the index:

postgres=# drop index i1;
DROP INDEX
-- potentially clean up any rows that violate the constraint and then
postgres=# create unique index concurrently i1 on t1(a);
CREATE INDEX
postgres=# \d t1
                        Table "public.t1"
 Column |         Type          | Collation | Nullable | Default 
--------+-----------------------+-----------+----------+---------
 a      | integer               |           |          | 
 b      | character varying(50) |           |          | 
Indexes:
    "i1" UNIQUE, btree (a)

postgres=# explain select * from t1 where a = 12345;
                          QUERY PLAN                           
---------------------------------------------------------------
 Index Scan using i1 on t1  (cost=0.43..8.45 rows=1 width=122)
   Index Cond: (a = 12345)
(2 rows)

Remember: when a create index operation fails in concurrent mode, make sure that you drop the index immediately.
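If you want to check whether such leftovers exist, the pg_index catalog flags them with indisvalid = false. Here is a minimal sketch using the psycopg2 driver (connection parameters are illustrative):

import psycopg2

# Connection parameters are illustrative -- adjust for your environment
conn = psycopg2.connect("dbname=postgres user=postgres")

with conn.cursor() as cur:
    # pg_index.indisvalid is false for indexes left behind by a failed
    # CREATE INDEX CONCURRENTLY; list them together with their table
    cur.execute("""
        select n.nspname, t.relname as table_name, i.relname as index_name
          from pg_index x
          join pg_class i on i.oid = x.indexrelid
          join pg_class t on t.oid = x.indrelid
          join pg_namespace n on n.oid = i.relnamespace
         where not x.indisvalid
    """)
    for schema, table, index in cur.fetchall():
        print("invalid index {}.{} on {}.{}".format(schema, index, schema, table))

conn.close()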

One more thing to keep in mind: when you create an index concurrently and another session is already modifying the data, the create index command waits until that other operation completes:

-- first session inserts data without completing the transaction
postgres=# begin;
BEGIN
Time: 0.579 ms
postgres=# insert into t1 select a.*, md5(a::varchar) from generate_series(6000001,7000000) a;
INSERT 0 1000000
-- second session tries to build the index
postgres=# create unique index concurrently i1 on t1(a);

The create index operation will wait until that completes:

postgres=# select query,state,wait_event,wait_event_type from pg_stat_activity where state ='active';
                                query                                 | state  | wait_event | wait_event_t
----------------------------------------------------------------------+--------+------------+-------------
 create unique index concurrently i1 on t1(a);                        | active | virtualxid | Lock
 select query,state,wait_event,wait_event_type from pg_stat_activity; | active |            | 

… meaning that when someone forgets to end the transaction, the create index command will wait forever. There is the parameter idle_in_transaction_session_timeout, which gives you more control over that, but you still need to be aware of what is happening here.
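That parameter can be set instance-wide without a restart; a minimal sketch using psycopg2 (the 5-minute value is just an illustration):

import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")
# ALTER SYSTEM cannot run inside a transaction block, so enable autocommit
conn.autocommit = True

with conn.cursor() as cur:
    # Abort sessions that sit idle in an open transaction for more than 5 minutes
    cur.execute("ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min'")
    # Reload the configuration so the new value takes effect
    cur.execute("SELECT pg_reload_conf()")

conn.close()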

Happy index creation :)

 

The post Create index CONCURRENTLY in PostgreSQL appeared first on Blog dbi services.

DOAG2017 my impressions

Yann Neuhaus - Wed, 2017-11-22 11:28

As every year at the end of November, the biggest European Oracle conference takes place in Nürnberg: #DOAG2017. This year is a little bit special, because the DOAG celebrates the 30th edition of the conference.

dbi services is present for the 5th time with a booth and 8 sessions at the DOAG.
During the last 2 days I have already attended many sessions, and I want to give you my impressions and feedback about the market trends.
Tuesday morning, as usual, the conference started with a keynote, which is often not very interesting because it only repeats what was already communicated some weeks before at the Oracle OpenWorld conference. But this year that was not the case: I saw a very interesting session from Neil Sholay (Oracle) about technology and market shifts that will have an impact on our near future. For example, in the near future your running shoes will be made directly in the shop with a 3D printer, and your clothes will be made directly by a machine in the shop, which is 17 times faster than clothes made by a person.

After this nice introduction, I followed a very interesting session from Guido Schmutz (Trivadis) about Kafka, with a very nice live demo. I like to see live demos, but it is something I see less and less at the DOAG; at dbi services we try to have interesting live demos in each of our sessions. Later, after a short break, I was very curious to see how many people would attend the session from Jan Karremans (EDB) comparing Oracle to PostgreSQL, and as expected the room was full. So I can confirm that the interest in PostgreSQL sessions at the DOAG is very high, because today most Oracle DBAs, besides their usual tasks, also have to manage PostgreSQL databases.

This morning I followed a session from Mike Dietrich (Oracle) about the new Oracle database release model; as usual his session was very good, with several hundred participants.
The key message of the session: if you are still running Oracle database version 11.2.0.4, Mike advises upgrading it very soon, because at the beginning of next year (5 weeks) you will enter the extended support period, with additional cost for the support. Last but not least, at the beginning of this afternoon I saw the session "Cloud provider battle" from Manfred Klimke (Trevisto). The interest in this session was also very high, because I suppose most of the participants are not in the Cloud yet and don't know where they should go. During the session he presented a funny slide summarizing the available Cloud services with a pizza analogy, and I can confirm it reflects reality: "Dining at a restaurant" is the most expensive service.

As a conclusion of these 2 days: everything around Open Source is, besides the Cloud, also a very important topic at the DOAG, which also hosts presentations from Oracle's competitors.

 

The post DOAG2017 my impressions appeared first on Blog dbi services.

Introducing Data Hub Cloud Service to Manage Apache Cassandra and More

OTN TechBlog - Wed, 2017-11-22 11:00

Today we are introducing the general availability of the Oracle Data Hub Cloud Service. With Data Hub, developers are now able to initialize and run Apache Cassandra clusters on-demand without having to manage backups, patching and scaling for Cassandra clusters. Oracle Data Hub is a foundation for other databases like MongoDB, Postgres and more coming in the future. Read the full press release from OpenWorld 2017.

The Data Hub Cloud Service provides the following key benefits:

  • Dynamic Scalability – users will have access to an API and a web console interface to easily perform operations such as scale-up/scale-down or scale-out/scale-in in minutes, and size their clusters according to their needs.
  • Full Control – as development teams migrate from an on-premises environment to the cloud, they continue to have full secure shell (ssh) access to the underlying virtual machines (VMs) hosting these database clusters, so that they can log in and perform management tasks in the same way they have been doing.

Developers may be looking for more than relational data management for their applications. MySQL and Oracle Database have been around for quite some time already on Oracle Cloud. Today, application developers are looking for the flexibility to choose the database technology according to the data models they use within their application. This use case specific approach enables these developers to choose the Oracle Database Cloud Service when appropriate and in other cases choose other database technologies such as MySQL, MongoDB, Redis, Apache Cassandra etc.

In such a polyglot development environment, enterprise IT faces the key challenge of how to support, as well as lower the total cost of ownership (TCO) of managing, such open source database technologies within the organization. This is specifically the problem that the Oracle Data Hub Cloud Service addresses.

How to Use Data Hub Cloud Service

Using the Data Hub Cloud Service to provision, administer or monitor an Apache Cassandra database cluster is extremely simple and easy. You can create an Apache Cassandra database cluster with as many nodes as you would like in 2 simple steps:

  • Step 1
    • Choose between Oracle Cloud Infrastructure and Oracle Cloud Infrastructure Classic regions
    • Choose between the latest (3.11) and stable (3.10) Apache Cassandra database versions
  • Step 2
    • Choose the cluster size, compute shape (processor cores) and the storage size. Don't worry about choosing the right value here. You can always dynamically resize when you need additional compute power or storage.
    • Provide the shell access information so that you have full control of your database clusters.

Flexibility to choose the Database Version

When you create the cluster, you have the flexibility to choose the Apache Cassandra version. Additionally, you can easily patch to the latest patch level as it becomes available for that Cassandra version. Once you choose to apply the patch, the service applies it across your cluster in a rolling fashion to minimize any downtime.

Dynamic Scaling

During provisioning, you have the flexibility to choose the cluster size, the compute shapes (compute cores and memory), and the storage sizes for all the nodes within the cluster. This flexibility allows you to choose the compute and storage shapes that best meet your workload and performance requirements.
If you want to add either additional nodes to your cluster (commonly referred to as scale-out) or additional storage to the nodes in the cluster, you can easily do so using the Data Hub Cloud Service API or Console. So you don't have to worry about sizing your workload at the time of provisioning.

Full Control

You have full shell access to all the nodes within the cluster, so that you have full control of the underlying database and its storage. You also have the full flexibility to log in to these nodes and configure the database instances to meet your scalability and performance requirements.

Once you select Create, the service will create the compute instances, attach the block volumes to the nodes and then lay out the Apache Cassandra binaries on each of the nodes in the cluster. On the Oracle Cloud Infrastructure Classic platform, the service will also automatically enable the network access rules so that users can begin to use the CQL (Cassandra Query Language) tool to create their Cassandra database. On the Oracle Cloud Infrastructure platform, you have full control and flexibility to create this cluster within a specific subnet in your virtual cloud network (VCN).
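Once the cluster is reachable, talking CQL to it is straightforward; here is a minimal sketch using the DataStax Python driver (cassandra-driver), where the contact points, credentials, keyspace and table names are purely illustrative:

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Contact points and credentials are illustrative -- use the addresses
# and credentials of your own Data Hub Cassandra cluster
auth = PlainTextAuthProvider(username="cassandra", password="secret")
cluster = Cluster(["203.0.113.10", "203.0.113.11"], auth_provider=auth)
session = cluster.connect()

# Create a keyspace and a simple table, then insert and read a row
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("CREATE TABLE IF NOT EXISTS demo.users (id int PRIMARY KEY, name text)")
session.execute("INSERT INTO demo.users (id, name) VALUES (1, 'alice')")

for row in session.execute("SELECT id, name FROM demo.users"):
    print(row.id, row.name)

cluster.shutdown()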

Getting Started

This service is accessible via the Oracle My Services dashboard for users already on Universal Credits. And if you're not already using the Oracle Cloud, you can start off with free Cloud credits to explore the services. We would appreciate it if you could give this service a spin and share your feedback.


12c Multitenant Internals: compiling system package from PDB

Yann Neuhaus - Wed, 2017-11-22 07:38

When I explain the multitenant internals, I show that all metadata about system procedures and packages is stored only in CDB$ROOT and is accessed from the PDBs through metadata links. I take an example with DBMS_SYSTEM, which has nothing in SOURCE$ of the PDB, but I show that we can compile it from the PDB. This is my way to prove that the session can access the system objects, internally switching to the root container when it needs to read SOURCE$. At the DOAG Conference I got a very interesting question about what exactly happens in CDB$ROOT: is the session really executing all the DML on the internal tables storing the compiled code of the procedure?

My first answer was something like 'why not', because the session in a PDB can switch and make modifications in CDB$ROOT internally. For example, even a local PDB DBA can change some 'spfile' parameters which are actually stored in the CDB$ROOT. But then I realized that the question goes further: is the PDB session really compiling the DBMS_SYSTEM package in the CDB$ROOT? Actually, there are some DDL statements that are transformed into 'no-operation' when executed on the PDB.

To see which ones are affected, the best approach is to trace:

SQL> alter session set events='10046 trace name context forever, level 4';
Session altered.
SQL> alter session set container=PDB1;
Session altered.
SQL> alter package dbms_system compile;
Package altered.
SQL> alter session set events='10046 trace name context off';
Session altered.

I’ll not show the whole trace here. For sure I can see that the session switches to CDB$ROOT to read the source code of the package:

*** 2017-11-22T08:36:01.963680+01:00 (CDB$ROOT(1))
=====================
PARSING IN CURSOR #140650193204552 len=54 dep=1 uid=0 oct=3 lid=0 tim=5178881528 hv=696375357 ad='7bafeab8' sqlid='9gq78x8ns3q1x'
select source from source$ where obj#=:1 order by line
END OF STMT
PARSE #140650193204552:c=0,e=290,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=5178881527
EXEC #140650295606992:c=1000,e=287,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=813480514,tim=5178881999
FETCH #140650295606992:c=0,e=35,p=0,cr=4,cu=0,mis=0,r=1,dep=2,og=4,plh=813480514,tim=5178882057
CLOSE #140650295606992:c=0,e=12,dep=2,type=3,tim=5178882104

That was my point about metadata links. But now about modifications.

As I need to see only the statements, I can use TKPROF to get them aggregated, but then the container switch – like (CDB$ROOT(1)) here – is ignored.

Here is a small AWK script I use to add the Container ID to the SQL ID so that it is visible and detailed in the TKPROF output:

awk '/^[*]{3}/{con=$3}/^PARSING IN/{sub(/sqlid=./,"&"con" ")}{print > "con_"FILENAME }'

Then I run TKPROF on the resulting file, with ‘sort=(execu)’ so that I have the modifications (insert/delete/update) first. The result starts with something like this:

SQL ID: (PDB1(3)) 1gfaj4z5hn1kf Plan Hash: 1110520934
 
delete from dependency$
where
d_obj#=:1

I know that dependencies are replicated into all containers (because table metadata is replicated into all containers), so I see the following tables modified in the PDB: DEPENDENCY$, ACCESS$, DIANA_VERSION$, and of course OBJ$.

But to answer the initial question: there are no modifications done in the CDB$ROOT, only SELECT statements there, on SOURCE$, SETTINGS$, CODEAUTH$, WARNING_SETTINGS$.

So, probably, the updates have been transformed to no-op operations once the session is aware that the source is the same (same signature) and it just reads the compilation status.

Just as a comparison, tracing the same compilation when done in the CDB$ROOT will show inserts/deletes/updates on ARGUMENT$, PROCEDUREINFO$, SETTINGS$, PROCEDUREPLSQL$, IDL_UB1$, IDL_SB4$, IDL_UB2$, IDL_CHAR$, … all the tables storing the compiled code.

So basically, when running DDL on metadata links in a PDB, not all the work is done in the CDB, especially not writing again what is already there (because you always upgrade the CDB$ROOT first). However, up to 12.2 we don’t see a big difference in time. This should change in 18c where the set of DDL to be run on the PDB will be pre-processed to avoid unnecessary operations.

 

The post 12c Multitenant Internals: compiling system package from PDB appeared first on Blog dbi services.

Query Flat Files in S3 with Amazon Athena

Pakistan's First Oracle Blog - Tue, 2017-11-21 21:01
Amazon Athena enables you to access data present in flat files stored in S3 (Simple Storage Service) as if it were in a table in a database. And you don't have to set up any server or any other software to accomplish that.

That's another glowing example of being 'Serverless.'


So if a telecommunications company has hundreds of thousands or more call detail record (CDR) files in CSV, Apache Parquet, or any other supported format, they can just be uploaded to an S3 bucket, and then, by using AWS Athena, that CDR data can be queried using well-known ANSI SQL.
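As a rough sketch of what that looks like from code, here is how such a query could be submitted through boto3, the AWS SDK for Python; the database, table and result bucket names are purely illustrative and assume the CDR table has already been defined over the S3 data:

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Database, table and output bucket are illustrative placeholders
resp = athena.start_query_execution(
    QueryString="SELECT caller, callee, duration FROM cdr.calls LIMIT 10",
    QueryExecutionContext={"Database": "cdr"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
qid = resp["QueryExecutionId"]

# Poll until the query finishes (Athena runs it asynchronously)
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])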

Ease of use, performance, and cost savings are a few of the benefits of the AWS Athena service. True to the Cloud promise, with Athena you are charged for what you actually do, i.e. you are only charged for the queries: $5 per terabyte scanned. Beyond S3 there are no additional storage costs.

So if you have a huge amount of formatted data in files and all you want to do is query that data using familiar ANSI SQL, then AWS Athena is the way to go. Be aware that Athena is not for enterprise reporting and business intelligence; for that purpose there is AWS Redshift. Athena is also not for running highly distributed processing frameworks such as Hadoop; for that purpose there is AWS EMR. Athena is best suited for running interactive queries on your supported, formatted data in S3.

Remember to keep reading the AWS Athena documentation as it will keep improving, lifting limitations, and changing like everything else in the cloud.
Categories: DBA Blogs

RMAN and archivelogs

Tom Kyte - Tue, 2017-11-21 18:26
Hi, I have read quite a bit on Oracles RMAN utility and know that for hot backups RMAN doesn't use old method of placing tablespaces in Archive log mode freezing datafile headers & writing changes to Redo/ Archive logs. Hence a company with a larg...
Categories: DBA Blogs

Difference between "consistent gets direct" and "physical reads direct"

Tom Kyte - Tue, 2017-11-21 18:26
Hi Tom/Team, Could you explain the difference between "consistent gets direct" and "physical reads direct"? Thanks & Regards
Categories: DBA Blogs

Linuxgiving! The Things We do With and For Oracle Linux

OTN TechBlog - Tue, 2017-11-21 17:00

By: Sergio Leunissen - VP, Operating Systems & Virtualization 

It is almost Thanksgiving, so you may be thinking about things that you’re thankful for –good food, family and friends.  When it comes to making your (an enterprise software developer’s) work life better, your list might include Docker, Kubernetes, VirtualBox and GitHub. I’ll bet Oracle Linux wasn’t on your list, but here’s why it should be…

As enterprises move to the Cloud and DevOps increases in importance, application development also has to move faster. Here’s where Oracle Linux comes in. Not only is Oracle Linux free to download and use, but it also comes pre-configured with access to our Oracle Linux yum server with tons of extra packages to address your development cravings, including:

If you’re still craving something sweet, you can add less complexity to your list, as with Oracle Linux you’ll have the advantage of running the exact same OS and version in development as you do in production (on-premises or in the cloud).


And we’re constantly working on ways to spice up your experience with Linux, from things as simple as "make it boot faster," to always-available diagnostics for network filesystem mounts, to ways large systems can efficiently parallelize tasks. These posts, from members of the Oracle Linux Kernel Development team, will show you how we are doing this:

Accelerating Linux Boot Time

Pasha Tatashin describes optimizations to the kernel to speed up booting Linux, especially on large systems with many cores and large memory sizes.

Tracing NFS: Beyond tcpdump

Chuck Lever describes how we are investigating new ways to trace NFS client operations under heavy load and on high performance network fabrics so that system administrators can better observe and troubleshoot this network file system.

ktask: A Generic Framework for Parallelizing CPU-Intensive Work

Daniel Jordan describes a framework that’s been submitted to the Linux community which makes better use of available system resources to perform large scale housekeeping tasks initiated by the kernel or through system calls.

On top of this, you can have your pumpkin, apple or whatever pie you like and eat it too – since Oracle Linux Premier Support is included with your Oracle Cloud Infrastructure subscription – yes, that includes Ksplice zero down-time updates and much more at no additional cost.

Most everyone's business runs on Linux now; it's at the core of today’s cloud computing. There are still areas to improve, but if you look closely, Oracle Linux is the OS you’ll want for app/dev in your enterprise.

Partner Webcast – Identity Management Update: IDM 12c Release

Oracle Identity Management, a well-recognized offering by Oracle, enables organizations to effectively manage the end-to-end lifecycle of user identities across all enterprise resources, both within...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Scaling Oracle using NVMe flash

Gerger Consulting - Tue, 2017-11-21 07:37
Attend the free webinar by storage expert Venky Nagapudi and learn how to improve the performance of your Oracle Database using new storage technologies such as NVMe flash. 

About the Webinar
Growth in users and data puts an ever-increasing strain on transactional and analytics platforms. With many options available to scale platforms, what are the considerations and what are others choosing? Vexata’s VP of Product Management, Venky Nagapudi, covers how the latest storage-side technologies, like NVMe flash, can deliver vast improvements in performance as well as drive down the cost and complexity of platforms. He will also cover key use cases where storage-side solutions delivered amazing results for Vexata’s customers.
In this webinar, you will:
  • Hear real-world performance scaling use cases.
  • Review the pros & cons of common scaling options.
  • See specific results of choosing a storage-side solution.


About the Presenter


Venky Nagapudi has 20 years of experience in engineering and product management in the storage, networking and computing industries. He led product management at EMC and Applied Microsystems, held engineering leadership roles at Intel and Brocade, and holds 10 patents. He has an MBA from the Haas School of Business at UC Berkeley, an MSEE from North Carolina State University, and a BSEE from IIT Madras.

Sign up now.

Categories: Development

Partner Webcast – Recovery Appliance (ZDLRA) - Data protection for Oracle Database

Today’s solutions for protecting business data fail to meet the needs of mission critical enterprise databases. They lose up to a day of business data on every restore, place a heavy load on...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Import data from Flat File to two different Table using UTL_FILE.

Tom Kyte - Tue, 2017-11-21 00:06
Hi Please help this Question. Import data from Following Flat File to two different Table using UTL_FILE. a. EMP and b. DEPT Note --- 1. In Last Line NULL Employee Should not Entry into Table. 2. Deptno Should go to both the Table EMP a...
Categories: DBA Blogs
