Feed aggregator

Flashback Archive Internal History table is not compressed -- Kindly suggest

Tom Kyte - Tue, 2016-12-06 05:06
Hi Tom, Once flashback archive is enabled for a table, a corresponding history table will be created by oracle internally. It is automatically partitioned and compressed as well. But when I have enabled FBA for a table, the history table is partit...
Categories: DBA Blogs

Column with default value

Tom Kyte - Tue, 2016-12-06 05:06
I have a table in which one of the column is as below create_date TIMESTAMP(6) default SYSDATE, When the record is inserted from the GUI(URL) , it doesn't insert any value for create_Date .. But when inserted from other than URL i...
Categories: DBA Blogs

Redo Log 4K Blocksize

Tom Kyte - Tue, 2016-12-06 05:06
Good Evening, In 11g, I've read about the possibility of setting redo logs to have a blocksize of 4k. Supposedly, the blocksize is automatically set based on the block sector of the disk. Supposedly, high capacity disks have block sectors of 4k....
Categories: DBA Blogs

SQL profile is usable?

Tom Kyte - Tue, 2016-12-06 05:06
Hi,guy! I have some question about the SQL PROFILE, when use SQL PROFILE to bind the SQL,it performanced good,but as time goes by,the data in the table will grow rapidly and the SQL which use SQL PROFILE performanced bad. So what's the meaning of th...
Categories: DBA Blogs

db2 Query in Unix: load from /dev/null of del replace into Schema.Tablename nonrecoverable;

Tom Kyte - Tue, 2016-12-06 05:06
Hi Tom, In Unix db2 I am using the below query to clear the table. db2 Query in Unix: load from /dev/null of del replace into Schema.Tablename nonrecoverable; What would be the best approach to do the same thing in Oracle. I don't thin...
Categories: DBA Blogs

How mandatory is to use DBMS_AUDIT_MGMT

Tom Kyte - Tue, 2016-12-06 05:06
Hello everyone, One question please, how mandatory is to use the package DBMS_AUDIT_MGMT for Oracle. a)There is a NOT ADVISABLE suggestion from Oracle to work in aud$ table directly, to force to use ALWAYS the package. b) is simple we ADVISE ...
Categories: DBA Blogs

Latch Free

Tom Kyte - Tue, 2016-12-06 05:06
Hi Connor, I have no intention of complaining. But all over the web I find lot of discussion about latch , latch spin, and latch sleep. And the description goes like below. 1] Try to acquire a latch 2] Failed ! Try again after sometime 3] Ret...
Categories: DBA Blogs

How to reduce Buffer Busy Waits with Hash Partitioned Tables in #Oracle

The Oracle Instructor - Tue, 2016-12-06 04:57

Large OLTP sites may suffer from Buffer Busy Waits. Hash Partitioning is one way to reduce them on both Indexes and Tables. My last post demonstrated that for Indexes, now let's see how it looks with Tables. Initially there is a normal table that is not yet hash partitioned. If many sessions now insert simultaneously, the problem shows:

Contention with a heap table

The last extent becomes a hot spot; all inserts go there and only a limited number of blocks are available. Therefore we will see Buffer Busy Waits. The playground:

SQL> create table t (id number, sometext varchar2(50));

Table created.

create sequence id_seq;

Sequence created.

create or replace procedure manyinserts as
begin
 for i in 1..10000 loop
  insert into t values (id_seq.nextval, 'DOES THIS CAUSE BUFFER BUSY WAITS?');
 end loop;
 commit;
end;
/

Procedure created.

create or replace procedure manysessions as
v_jobno number:=0;
begin
FOR i in 1..100 LOOP
 dbms_job.submit(v_jobno,'manyinserts;', sysdate);
END LOOP;
commit;
end;
/

Procedure created.

The procedure manysessions is how I simulate OLTP end user activity on my demo system. Calling it leads to 100 job sessions, each doing 10,000 inserts:

SQL> exec manysessions

PL/SQL procedure successfully completed.

SQL> select count(*) from t;

  COUNT(*)
----------
   1000000

SQL> select object_name,subobject_name,value from v$segment_statistics 
     where owner='ADAM' 
     and statistic_name='buffer busy waits'
     and object_name = 'T';

OBJECT_NAM SUBOBJECT_	   VALUE
---------- ---------- ----------
T			    2985

So we got thousands of Buffer Busy Waits that way. Now the remedy:

SQL> drop table t purge;

Table dropped.

SQL> create table t (id number, sometext varchar2(50))
     partition by hash (id) partitions 32;

Table created.

 
SQL> alter procedure manyinserts compile;

Procedure altered.

SQL> alter procedure manysessions compile;

Procedure altered.

SQL> exec manysessions 

PL/SQL procedure successfully completed.

SQL> select count(*) from t;

  COUNT(*)
----------
   1000000

SQL> select object_name,subobject_name,value from v$segment_statistics 
     where owner='ADAM' 
     and statistic_name='buffer busy waits'
     and object_name = 'T';  

OBJECT_NAM SUBOBJECT_	   VALUE
---------- ---------- ----------
T	   SYS_P249	       0
T	   SYS_P250	       1
T	   SYS_P251	       0
T	   SYS_P252	       0
T	   SYS_P253	       0
T	   SYS_P254	       0
T	   SYS_P255	       0
T	   SYS_P256	       1
T	   SYS_P257	       0
T	   SYS_P258	       0
T	   SYS_P259	       1
T	   SYS_P260	       0
T	   SYS_P261	       0
T	   SYS_P262	       0
T	   SYS_P263	       0
T	   SYS_P264	       1
T	   SYS_P265	       1
T	   SYS_P266	       0
T	   SYS_P267	       0
T	   SYS_P268	       0
T	   SYS_P269	       0
T	   SYS_P270	       0
T	   SYS_P271	       1
T	   SYS_P272	       0
T	   SYS_P273	       0
T	   SYS_P274	       0
T	   SYS_P275	       1
T	   SYS_P276	       0
T	   SYS_P277	       0
T	   SYS_P278	       0
T	   SYS_P279	       2
T	   SYS_P280	       0

32 rows selected.

SQL> select sum(value) from v$segment_statistics 
     where owner='ADAM' 
     and statistic_name='buffer busy waits'
     and object_name = 'T';

SUM(VALUE)
----------
	 9

SQL> select 2985-9 as waits_gone from dual;

WAITS_GONE
----------
      2976

The hot spot is gone:

hash_part_table
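
As a quick sanity check, one might also confirm that the rows are spread evenly across the hash partitions. A minimal sketch, assuming the demo table T from above and the standard data dictionary views:

SQL> exec dbms_stats.gather_table_stats(user, 'T')

SQL> -- row distribution per hash partition; should be roughly even for a sequence-based key
SQL> select partition_name, num_rows
     from   user_tab_partitions
     where  table_name = 'T'
     order  by partition_position;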

This emphasizes again that Partitioning is not only for the Data Warehouse. Hash Partitioning in particular can be used to fight contention in OLTP environments.


Tagged: partitioning, Performance Tuning
Categories: DBA Blogs

Troubleshooting RAC Cloning Issues in EBS 12.1

Steven Chan - Tue, 2016-12-06 02:09

We have a white paper that outlines a framework for troubleshooting Rapid Clone issues:

Cloning EBS 12.1.3

Things can get more involved if your E-Business Suite environment uses Real Application Clusters (RAC). There's another white paper that has more details about what to do if you encounter problems when cloning RAC-based EBS environments:

This whitepaper has additional RAC-specific information, such as:

  • Pairsfile.txt
  • RMAN restore phases
  • Secondary node context file creation
  • Known issues (e.g. RAC to RAC cloning causing inventory registration issues)


Categories: APPS Blogs

12.2 Index Advanced Compression “High” – Part I (High Hopes)

Richard Foote - Mon, 2016-12-05 23:52
Oracle first introduced Advanced Compression for Indexes in 12.1 as I’ve discussed here a number of times. With Oracle Database 12c Release 2, you can now use Index Advanced Compression “High” to further (and potentially dramatically) improve the index compression ratio.  Instead of simply de-duplicating the index entries within an index leaf block, High Index […]
Categories: DBA Blogs

Overload Protection Support

Anthony Shorten - Mon, 2016-12-05 17:19

One of the features we support in Oracle Utilities Application Framework V4.3.x and above is the Oracle WebLogic Overload Protection feature. By default, Oracle WebLogic is set up with a global Work Manager which gives you unlimited connections to the server. Whilst this is reasonable for non-production systems, Oracle generally encourages people to limit connections in Production to avoid overloading the server with connections.

In production, it is generally accepted that the Oracle WebLogic servers will either be clustered or a set of managed servers, as this is the typical setup for the high availability requirements of that environment. Using these configurations, it is recommended to set limits on individual servers to enforce capacity requirements across your cluster/managed servers.

There are a number of recommendations when using Overload Protection:

  • The Oracle Utilities Application Framework automatically sets the panic action to system-exit. This is the recommended setting so that the server will stop and restart if it is overloaded. In a clustered or managed server environment, end users are routed to other servers in the configuration while the server is restarted by Node Manager. This is set at the ENVIRON.INI level as part of the install via the WLS_OVERRIDE_PROTECT variable, which is populated from the WebLogic Overload Protection setting in the configureEnv utility.
  • Ensure you have set up a high availability environment, either using Clustering or multiple managed servers with a proxy (like Oracle HTTP Server or Oracle Traffic Director). Oracle has Maximum Availability Guidelines that can help you plan your HA solution.
  • By default, the product ships with a single global Work Manager within the domain (this is the default domain from Oracle WebLogic). It is possible to create custom Work Manager definitions with a Capacity Constraint and/or Maximum Threads Constraint, which can be allocated to product servers to provide additional capacity controls.

For more information about Overload Protection and Work Managers, refer to Avoiding and Managing Overload and Using Work Managers to Optimize Scheduled Work.

ILM Planning - The First Steps

Anthony Shorten - Mon, 2016-12-05 16:22

The first part of implementing an Information Lifecycle Management (ILM) solution for your Oracle Utilities products using the ILM functionality provided is to decide the business retention periods for your data.

Before discussing the first steps, a couple of concepts need to be understood:

  • Active Period - This is the period/data group where the business needs fast update access to the data. This is the period the data is actively used in the product by the business.
  • Data Groups - These are the various stages the data is managed after the Active period and before archival. In these groups the ILM solution will use a combination of tiered storage solutions, partitioning and/or compression to realize cost savings.
  • Archival - This is typically the final state of the data where it is either placed on non-disk related archival media (such as tape) or simply removed.

The goal of the first steps is to decide two major requirements for each ILM enabled object:

  • How long should the active period be? In other words, how long does the business need update access to the data?
  • How long does the data need to remain accessible to the business? In other words, how long should the data be kept in the database overall? Remember the data is still accessible by the business whilst it is in the database.

The decisions here are affected by a number of key considerations:

  • How long the data needs to be available for update by business processes - This can be how long the business needs to rebill or how long update activity is allowed on a historical record. Remember this is the requirement for the BUSINESS to get update access.
  • How long you legally need to be able to access the records - In each jurisdiction there will be legal and government requirements on how long data must remain available and updatable. For example, there may be a government regulation around rebilling or how long a meter read can be available for change.
  • The overall data retention periods are dictated by how long the business and legal requirements are for access to the data. This can be tricky as tax requirements vary from country to country. For example, in most countries the data needs to be available to tax authorities for say 7 years, in machine readable format. This does not mean it needs to be in the system for 7 years, it just needs to be available when requested. I have seen customers use tape storage, off site storage or even the old microfiche storage (that is showing my age!).
  • Retention means that the data is available on the system even after update is no longer required. This means read only access is needed and the data can even be compressed to save storage and money. This is where the crossover to the technical aspects of the solution starts to happen. Oracle calls these Data Groups, where each group of data, based usually on date range, has different storage/compression/access characteristics. This can be expressed as a partition per data group to allow for physical separation of the data (a simplified sketch follows after this list). You should remember that the data is still accessible but it is not on the same physical storage and location as the more active data.
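
To make this concrete, here is a purely illustrative sketch of a partition-per-data-group layout in plain Oracle SQL. The table, column and tablespace names are hypothetical; the actual ILM-enabled objects, data groups and storage tiers are product- and site-specific:

create table meter_read_history (
  read_id    number,
  read_date  date,
  read_value number
)
partition by range (read_date) (
  -- oldest data group: read-only access, on a cheaper storage tier and compressed
  partition p_2013_and_older values less than (date '2014-01-01')
    tablespace cold_tier compress,
  -- intermediate data group
  partition p_2014_2015 values less than (date '2016-01-01')
    tablespace warm_tier,
  -- active period: uncompressed, on fast storage
  partition p_active values less than (maxvalue)
    tablespace fast_tier
);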

Now the best way of starting this process is working with the business to decide the retention and active periods for the data. It is not as simple as a single conversation and may require some flexibility in designing the business part of the solution.

Once agreement has been reached, the first part of the configuration in ILM is to update the Master Configuration for ILM with the retention periods agreed for the active period. This will enable the business part of the process to be initiated. The ILM configuration will be on each object, in some cases subsets of objects, to set the retention period in days. This is used by the ILM batch jobs to decide when to assess the records for the next data groups.

There will be additional articles in this series which walk you through the ILM process.

UKOUG 2016 – Second day

Yann Neuhaus - Mon, 2016-12-05 13:51

IMG_1965

This second day at UKOUG was quite good. I slept well at the Jurys Inn hotel and this morning I once again enjoyed a real English breakfast with beans, bacon, eggs and sausages. I like that to keep me going for the whole day ;)

Today, I attended the general keynote and several sessions around integration, APEX & Database Development and Database. My colleague, Franck Pachot, also presented today and I attended his session "12c Multitenant: Not a Revolution, Just an Evolution". His session reminded me of the article I wrote some years ago about Oracle Multitenant architecture and APEX.

Early in the morning, I followed "Application Container Cloud Service: Backend Integration Using Node.js". The presenter described what Node.js is and mentioned JavaScript frameworks that can easily be integrated with Node.js, such as Express.js, to create an HTTP server and retrieve data. He also presented an architecture where Node.js is hosted in Docker on the cloud.

After that, I attended the session "APEX Version Control & Team Working". During that session, I learned more about APEX version control best practices and which commands can be run through the SQL command line, the APEX Java utility and so on. I was quite happy to learn that for internal development we were not doing so badly: we already properly version and back up APEX workspaces, applications and themes. I now have information to improve our internal work around APEX development activities, such as APEX ATAF (Apex Test Automation Framework).

The next session was "Interactive Grids in Application Express 5.1". It was a demonstration-oriented session in which the presenter showed us amazing new features that will be incorporated in APEX 5.1. Most of the demonstration was based on the sample package application.

The next session was "Real Time Vehicle Tracking with APEX5". For me it was great to see the power of APEX and the Oracle Database to store and display data in real time through the APEX5 MapViewer. The application uses Oracle Spatial, getting data from each vehicle's GPS, with PL/SQL converting the data into geospatial information.

During the last session, "A RESTful MicroService for JSON Processing in the Database", I learned how to execute JavaScript directly from the database. In fact, with Java 8 and the Nashorn project it's now possible to execute JavaScript code from the JVM and thus directly in the database, avoiding data shipping.

That is all for today; see you tomorrow. We will now take some time with my blog reviewer to drink a few pints in an English pub.

The article UKOUG 2016 – Second day appeared first on Blog dbi services.

SQL Server 2016: distributed availability groups and cascaded replicas

Yann Neuhaus - Mon, 2016-12-05 12:28

During the last MVP summit, we had some interesting discussions about availability groups with the SQL Server team, and I remember someone asked about managing scenarios like Oracle cascaded destinations. The good news is that SQL Server 2016 already addresses this kind of scenario with distributed availability groups. For example, let's say you have to manage heavy reporting activity on your application, and a solution would be to offload this activity across several secondary read-only replicas. So, a typical architecture looks as follows:

blog 110 - 0 - initial infra AG

We basically want to achieve high availability in the primary datacenter (DATACENTER1), use the secondary datacenter as DR, and at the same time offload reporting activity to secondary replicas. But let's say you have low network bandwidth (a WAN of roughly 150-200 Mbps) between your two datacenters, which are geographically dispersed from each other. Depending on your workload against the availability group, we may experience high network traffic as the number of secondary replicas on the DR site increases. Indeed, the number of log blocks to replicate is directly proportional to the number of secondary replicas for the same payload.

I decided to simulate this scenario on my lab environment, which reflects the above scenario (2 replicas in the first datacenter and four other replicas in the second datacenter). I used two Lenovo T530 laptops with Hyper-V to simulate the whole environment, with a cross-datacenter network connection handled by two RRAS servers.

In addition, for the sake of precision, let's describe the test protocol:

  • I used a script which inserts a bunch of data from the primary replica (~ 900MB of data)
  • I ran the same workload test after adding one asynchronous read-only replica at a time, up to 4 replicas.
  • I collected performance data from various perfmon counters focused on the availability group network stack (both primary site and DR site)

Here is the output of the whole test.

blog 110 - 1 - network usage trend AG

The picture above is pretty clear. We notice that network bandwidth consumption grows when adding secondary replicas. In the last test, the network bandwidth reached 400 Mbps (received traffic) on the remote datacenter, while it reached 600 Mbps (send traffic) for the primary replica. Why is there a difference in network bandwidth consumption between the primary replica and the remote datacenter? Well, the answer is simple: network bandwidth consumption on the remote datacenter doesn't include the network traffic to the secondary located in the first datacenter for high availability.

We may also notice that the third iteration of the test (1 primary + 1 synchronous secondary + 2 asynchronous secondaries) shows a critical point if we face a scenario with a WAN connection between the two datacenters limited to 200 Mbps. Indeed, in this case the network bandwidth could quickly be saturated by the replication traffic between all the replicas, and here are probably the first symptoms you may encounter:

 blog 110 - 3 - AG DMV monitoring

A continuously high log send queue size for each concerned secondary replica on the remote datacenter (250 MB on average in my case)…
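
For reference, one way to watch that queue is through the availability group DMVs. A minimal sketch, assuming the standard sys.dm_hadr_database_replica_states and sys.availability_replicas views (run on the primary to see all replicas):

SELECT ar.replica_server_name,
       drs.database_id,
       drs.log_send_queue_size,   -- KB of log waiting to be sent to this replica
       drs.log_send_rate,         -- KB per second
       drs.redo_queue_size
FROM   sys.dm_hadr_database_replica_states AS drs
JOIN   sys.availability_replicas AS ar
       ON ar.replica_id = drs.replica_id
ORDER  BY drs.log_send_queue_size DESC;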

blog 110 - 3 - AG wait stats

You may minimize the network overhead by isolating the replication traffic on its own network, but in some cases, if you're unlucky, it will not be enough. This is a situation which may be solved by introducing distributed availability groups and the cascaded destinations principle, as shown below:

blog 110 - 4 - distributed infra AG

The distributed availability group feature permits offloading the replication traffic from the primary to the read-only secondaries by using a replica in the second datacenter. Thus, we are able to drastically reduce the cross-datacenter traffic from 4 replicas to only one. In addition, adding one or several other replicas may be considered, because this new architecture is more scalable and will only impact the local network bandwidth in the second datacenter.

Here is my new lab configuration after applying distributed availability groups to the previous architecture:

  • In the first datacenter, one availability group AdvGrp that includes two replicas in synchronous replication and automatic failover for HA purpose
  • In the second datacenter, one availability group AdvGrpDR that includes four replicas enrolled as read-only.
  • One distributed availability group AdvDistGrp which cascades between the two aforementioned availability groups (a T-SQL sketch of its definition follows below)
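
For illustration only, a distributed availability group of this kind is created on the primary replica of the first group roughly as follows; the availability group names match my lab, but the listener URLs are hypothetical, and a corresponding ALTER AVAILABILITY GROUP … JOIN must then be run on the primary replica of AdvGrpDR:

CREATE AVAILABILITY GROUP [AdvDistGrp]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AdvGrp' WITH
      (
         LISTENER_URL = 'tcp://advgrp-listener.dc1.lab:5022',    -- hypothetical endpoint URL
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      ),
      'AdvGrpDR' WITH
      (
         LISTENER_URL = 'tcp://advgrpdr-listener.dc2.lab:5022',  -- hypothetical endpoint URL
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      );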

blog 110 - 5 - distributed cfg AG

Let’s run the same workload test on the new architecture and let’s have a look at the new output:

The log send queue size got back to normal at the primary replica level in the first datacenter, because the replication traffic to the read-only replicas is now cascaded from the primary replica of the availability group located in the second datacenter (AdvGrpDR).

blog 110 - 6 - AG DMV monitoring distrib

From a wait statistics perspective, we got rid of HADR_DATABASE_FLOW_CONTROL, meaning we did not saturate the network link between the 2 datacenters.

blog 110 - 7 - AG wait stats distributed

The picture below confirms the replication traffic dropped drastically with this new configuration (150 Mbps vs 400 Mbps from the first architecture).

blog 110 - 8 - perfmon AG distributed

Bottom line

In this blog post I tried to demonstrate that using distributed availability groups to cascade the replication traffic through another replica may be a good idea in order to address scenarios which include many secondary replicas in a remote location with low network bandwidth. However, introducing distributed availability groups has a cost in terms of management, because we have to deal with an additional layer of complexity. But if the rewards make the effort worthwhile, we should consider this kind of architecture.

The article SQL Server 2016: distributed availability groups and cascaded replicas appeared first on Blog dbi services.

Simple CRUD Implementation with Oracle JET - Part III

Andrejus Baranovski - Mon, 2016-12-05 12:09
This is the third and last part in the JET CRUD series. I already had a post about JET CRUD a few months ago - Very Practical CRUD with JET and ADF BC - POST and DELETE Methods. But the current approach is simpler and better structured. I will go through the CREATE, UPDATE and DELETE operations.

You can download or view source code (JET app and ADF BC REST app) directly in GitHub  repository - jetcrud.

CREATE

We need to initialize a new model when creating a new row; this is done in addCustomer.js, where an empty model is initialized. The model is initialized from a common module definition - customController (read more about it in my previous post - Better Oracle JET Code Structuring with Your Own Modules - Part II):


An important trick which will save lines of code - you can reference model attributes directly, with no need to list all of them in advance. Data bind the form component to the JET model variable:


Each input can point to an attribute; this is how the UI will get/set values in the View Model:


The new model must be posted through the JET model API create operation. Here we can convert the model to JSON and use it as a parameter for create:


This is how it looks on UI - form with new customer data. Press Save button:


Behind the scenes REST POST method will be executed and data will be posted to the backend, where it will be processed and inserted into DB:


New row is saved and visible in the results table:


UPDATE

The edit form implementation is very similar to create; actually, the two could even be combined into one. The only differences are how you initialize the current row for editing (check - Oracle JET CRUD - Search and Edit Form - Part I) and how you save the changes - you must use a different method from the Oracle JET model API - save. The rest is very similar to the create operation handling:


Edit form UI - it renders different set of fields:


Behind the scenes it executes REST PATCH method, which will update only changed attributes:


DELETE

This is the simplest operation of the three. You should get the instance of the row to delete. This can be done through a JavaScript promise; JET does not return the actual model instantly - it returns a promise and you get the model asynchronously. Call the JET model API destroy operation to remove the row entry:


Row is selected and removed:


Behind the scenes it calls REST DELETE method and removes row from backend:


Read previous posts:

1. Oracle JET CRUD - Search and Edit Form - Part I
2. Better Oracle JET Code Structuring with Your Own Modules - Part II

UKOUG 2016 Day 2

Yann Neuhaus - Mon, 2016-12-05 12:07

uk2

Today I attended a first session about one of my favorite tools: Upgrade to EM 13c now. The session was presented by Phil Gric from Red Stack Tech.

At the beginning he described the most common mistakes made while implementing Enterprise Manager:

- EM 13c is an enterprise application

- It is a critical part of your infrastructure

- it is designed to help you

- EM 13c is not a glorified db console

- IT manager should not see EM as a job for DBA

He described the main prerequisites before performing an EM 13c upgrade (for example, disabling optimizer_adaptive_features). He also talked about issues, such as the upgrade creating users with the sysman password; we should ensure that the repository password policy accepts such a password.

There is also an issue when upgrading the agent on AIX to version 13.2: there is a problem securing the agent due to SHA encryption (Metalink Note 1965676.1).

To complete his presentation, he described the main new features in EM 13c: export and import of incident rules, incident compression, always-on monitoring, more than 300 new verbs in emcli and generally improved functionality, system broadcast, comparison and drift management.

He finally explained why, for him, it is important to regularly upgrade to the latest EM 13c version: it is easy to upgrade, and the longer you wait, the closer it is to the next upgrade :=))

The second presentation was about the 12c upgrade: the good, the bad and the ugly, presented by Niall Litchfield. He talked about his experience of upgrading a very large infrastructure to 12c, composed of more than 100 servers, with database versions from 10.1 to 11.2.0.3, with RAC or single instances.

His first advice was to read the Mike Dietrich documentation (Update, Migrate, Consolidate to 12c), and to have a look at the Oracle recommended patch list.

A good reason to upgrade is that support for 11g ends at the end of the year, and extended support is expensive.

The good news after this huge upgrade was that there were no upgrade failures (tens of clusters, hundreds of servers and databases), and a performance benchmark showed a 50% improvement.

The bad and ugly news concerns the number of patches. It also concerns the JSON bundle patches, which require database bundle patches. He also advised us to turn off optimizer_adaptive_features (also recommended to be disabled with EM 13c, PeopleSoft and EBS). A final ugly point is the documentation: there is no single place to read the documentation, but many. He also recommended allowing significant time for testing the database and the applications after the upgrade to 12c.

Then I attended a session about Oracle Database 12c on Windows, presented by Christian Shay of Oracle.

He showed us the database certification on 64-bit Windows. In short, Oracle 12.2 is certified on Windows Server 2012, Windows Server 2012 R2, Windows 10 and Windows Server 2016, while Oracle 12.1 is certified on the same versions except Windows Server 2016.

In Windows 8 and Windows Server 2012, Microsoft introduced the Group Managed Service Account (GMSA), i.e. a domain-level account which can be used by multiple servers in that domain to run their services under this account. A GMSA can be the Oracle Home user for Oracle Database Real Application Clusters (Oracle RAC), single instance, and client installations. It has similarities with the 'oracle' user on Linux, as you are able to connect on Windows with this user and perform administrative tasks like creating a database, installing Oracle or upgrading databases.

In Windows 7 and Windows Server 2008 R2, Microsoft introduced virtual accounts. A virtual account can be the Oracle home user for Oracle Database single instance and client installations.

The recommendations are the following: for a DB server (single instance), use a virtual account to avoid password management (12.2); for 12.1, specify a Windows user account during installation. For RAC DB and Grid Infrastructure, use a domain user or group managed service account; with a GMSA you do not need to provide the password for any database operation.

He also talked about large page support for Windows. When large page support is enabled, the CPU is able to access the Oracle database buffers in RAM more quickly. It will address the buffers in 2 MB pages instead of 4 KB increments.

Large pages can be used in two modes: regular or mixed mode. Regular mode means an attempt is made to allocate all of the SGA in large pages; if enough large pages are not available, the database will not come up. That is the reason mixed mode is perhaps better: if all the SGA cannot be allocated in large pages, the remaining pages will be allocated as regular pages and the instance will come up.

I finished my UKOUG day by attending Franck Pachot's session about 12c Multitenant (not a revolution, just an evolution). He clearly explained that we do not have to fear 12c Multitenant; since the beginning of Oracle there have been many new features that a lot of people feared, but they are now implemented and work correctly. By the way, the patch upgrade optimization is only partially implemented; we will see how 12c Multitenant evolves over the next years.

The article UKOUG 2016 Day 2 appeared first on Blog dbi services.

About Joins

Tom Kyte - Mon, 2016-12-05 10:46
Hello, I am facing a problem with alias names in joins. Please help me out. Below is the example: select y.employee_id from ((select employee_id,department_id,manager_id from employees)a join (select department_id,manager_id from departme...
Categories: DBA Blogs

Dynamic Cursors

Tom Kyte - Mon, 2016-12-05 10:46
i have to write a procedure where the procedure takes a parameter say employee_status(Retired/Not-Retired) based on the parameter i've to use a cursor select employees from mytable who are retired or not retired and loop over all the employees (Ret...
Categories: DBA Blogs

Import Data from Excel to DB Table

Tom Kyte - Mon, 2016-12-05 10:46
Hi I need all-time executable SP that can import data from an Excel and do some DML operation on live tables. First of all, is that possible? IF so 1. what are the many ways that we can do that? 2. How do we do that using a Stored Procedu...
Categories: DBA Blogs

Introducing On Demand Training from Rittman Mead

Rittman Mead Consulting - Mon, 2016-12-05 08:00

Rittman Mead is happy to announce that its much anticipated On Demand Training (ODT) service is live, giving people the opportunity to receive expertly written and delivered self-paced training courses that can be accessed online anywhere, at any time.

We have been delivering technical & end-user courses based on Oracle Analytics products all over the world for the past decade.

While our classroom sessions continue to offer an unrivalled experience for our trainees, we understand that in the modern era, flexibility is important.

ODT has been built based on the feedback of our clients and network so that you can:

  • Experience our training regardless of the distance to travel

  • Keep your members of staff on site at crucial periods in your company's calendar

  • Give participants the ability to reinforce the lessons they’ve learnt afterwards

Learn

Use Rittman Mead's LMS as your virtual classroom to access all course materials, lesson demos and slides

Practice

Get hands on with your very own cloud based training environment

Engage

Submit questions to Rittman Mead's Principal Trainer network on subjects that might be specific to your everyday use of the product

Each course provides 30 days' access to the above, ensuring you have enough time to learn at your own pace and reinforce each lesson.

We’re feeling particularly seasonal down here in Brighton, so to celebrate the launch of our platform we’re offering a 40% discount on our first live course OBIEE 12c Front End Development & Data Visualization between now and January 31st.

Simply use the discount code RMODT12C on checkout to take advantage of this exclusive offer.

For more details and to start your On Demand learning experience with Rittman Mead please check out:

  • Our webpage where you can find out more information about ODT and register an account for our LMS
  • The Rittman Mead LMS where you can view our course catalog and purchase courses

You can also contact training@rittmanmead.com if you have any questions about the service.

Happy Learning!!!

Categories: BI & Warehousing
