Feed aggregator

ILM Planning - The First Steps

Anthony Shorten - Mon, 2016-12-05 16:22

The first part of implementing an Information Lifecycle Management (ILM) solution for your Oracle Utilities products using the ILM functionality provided is to decide the business retention periods for your data.

Before discussing the first steps a couple of concepts need to be understood:

  • Active Period - This is the period/data group where the business needs fast update access to the data. This is the period the data is actively used in the product by the business.
  • Data Groups - These are the various stages through which the data is managed after the Active period and before archival. In these groups the ILM solution will use a combination of tiered storage, partitioning and/or compression to realize cost savings.
  • Archival - This is typically the final state of the data where it is either placed on non-disk related archival media (such as tape) or simply removed.

The goal of the first steps is to decide two major requirements for each ILM enabled object:

  • How long should the active period be? In other words, for how long does the business need update access to the data?
  • How long does the data need to remain accessible to the business? In other words, how long should the data be kept in the database overall? Remember that the data remains accessible to the business while it is in the database.

The decisions here are affected by a number of key considerations:

  • How long the data needs to be available for update by business processes - This can be how long the business needs to be able to rebill, or how long update activity is allowed on a historical record. Remember this is the requirement for the BUSINESS to get update access.
  • How long you legally need to be able to access and update the records - Each jurisdiction will have legal and government requirements on how long data can be updated. For example, there may be a government regulation around rebilling or around how long a meter read can remain open for change.
  • The overall data retention periods are dictated by the business and legal requirements for access to the data. This can be tricky as tax requirements vary from country to country. For example, in most countries the data needs to be available to tax authorities for, say, 7 years, in machine readable format. This does not mean it needs to be in the system for 7 years; it just needs to be available when requested. I have seen customers use tape storage, off-site storage or even the old microfiche storage (that is showing my age!).
  • Retention means that the data is available on the system even after update is no longer required. This means read-only access is needed and the data can even be compressed to save storage and money. This is where the crossover to the technical aspects of the solution starts to happen. Oracle calls these Data Groups: each group of data, usually based on a date range, has different storage/compression/access characteristics. This can be expressed as one partition per data group to allow physical separation of the data (see the sketch after this list). Remember that the data is still accessible; it is just not on the same physical storage and location as the more active data.
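
To make the crossover concrete, here is a minimal sketch of how data groups can map to partitions. The table, column and tablespace names are illustrative only, not the actual product schema:

  -- Minimal sketch: each partition is one data group, on its own storage tier,
  -- with its own compression setting; the newest partition is the active period.
  CREATE TABLE meter_read_history (
    read_id    NUMBER,
    read_date  DATE,
    read_value NUMBER
  )
  PARTITION BY RANGE (read_date) (
    PARTITION p_2014   VALUES LESS THAN (DATE '2015-01-01')
      TABLESPACE tier3_data COMPRESS FOR OLTP,   -- oldest data group, cheapest storage
    PARTITION p_2015   VALUES LESS THAN (DATE '2016-01-01')
      TABLESPACE tier2_data COMPRESS FOR OLTP,
    PARTITION p_active VALUES LESS THAN (MAXVALUE)
      TABLESPACE tier1_data NOCOMPRESS           -- active period, fast update access
  );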

Now the best way of starting this process is working with the business to decide the retention and active periods for the data. It is not as simple as a single conversation and may require some flexibility in designing the business part of the solution.

Once agreement has been reached, the first part of the ILM configuration is to update the ILM Master Configuration with the retention periods agreed for the active period. This enables the business part of the process to be initiated. The ILM configuration is set on each object (in some cases on subsets of objects) as a retention period in days, which the ILM batch jobs use to decide when to assess records for movement to the next data group.
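
As a hedged illustration of the kind of date-based check that configuration drives (the table and column names below are invented for the example and are not the actual Oracle Utilities tables), the batch processing conceptually does something like:

  -- Hedged sketch only: illustrative names. A batch job flags rows whose ILM
  -- date is older than the configured retention period (in days) so they can
  -- be assessed for the next data group.
  UPDATE my_ilm_enabled_object
  SET    ilm_archive_flag = 'Y'
  WHERE  ilm_archive_flag = 'N'
  AND    ilm_date < TRUNC(SYSDATE) - :retention_days;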

There will be additional articles in this series which walk you through the ILM process.

UKOUG 2016 – Second day

Yann Neuhaus - Mon, 2016-12-05 13:51


This second day at UKOUG was quite good. I slept well at the Jurys Inn hotel and this morning I once again enjoyed a real English breakfast with beans, bacon, eggs and sausages. I like that to keep me going for the whole day ;)

Today, I attended the general keynote and several sessions around integration, APEX & Database Development and Database. My colleague Franck Pachot also presented today and I attended his session “12c Multitenant: Not a Revolution, Just an Evolution”. His session reminded me of the article I wrote some years ago about the Oracle Multitenant architecture and APEX.

Early in the morning, I followed “Application Container Cloud Service: Backend Integration Using Node.js”. The presenter described what Node.js is and showed JavaScript frameworks that integrate easily with Node.js, such as Express.js, to create an HTTP server and serve data over HTTP. He also presented an architecture where Node.js is hosted in Docker on the cloud.

After that, I attended the session “APEX Version Control & Team Working”. During that session, I learned more about APEX version control best practices and which useful commands are available through the SQL command line, the APEX Java utility and so on. I was quite happy to learn that our internal development practices are not so bad: we already use version control properly and back up APEX workspaces, applications and themes. I now have information to improve our internal APEX development activities, for instance around APEX ATAF (APEX Test Automation Framework).

The next session was “Interactive Grids in Application Express 5.1”. This was a demonstration-oriented session in which the presenter showed us amazing new features that will be incorporated in APEX 5.1. Most of the demonstration was based on the sample packaged application.

The next session was “Real Time Vehicle Tracking with APEX5”. For me it was great to see the power of APEX and the Oracle Database to store and display data in real time through the APEX5 MapViewer. The application uses Oracle Spatial, getting data from each vehicle's GPS, with PL/SQL converting the data into geospatial information.

During the last session, “A RESTful MicroService for JSON Processing in the Database”, I learned how to execute JavaScript directly in the database. In fact, with Java 8 and the Nashorn project it is now possible to execute JavaScript code from the JVM, and therefore directly in the database, avoiding data shipping.

That is all for today; see you tomorrow. My blog reviewer and I will now take some time to drink a few pints in an English pub.

SQL Server 2016: distributed availability groups and cascaded replicas

Yann Neuhaus - Mon, 2016-12-05 12:28

During the last MVP summit, we had some interesting discussions about availability groups with the SQL Server team, and I remember someone asked about managing scenarios like Oracle cascaded destinations. The good news is that SQL Server 2016 already addresses this kind of scenario with distributed availability groups. For example, let's say you have to manage heavy reporting activity on your application, and a solution would be to offload this activity across several secondary read-only replicas. A typical architecture looks as follows:

[Figure: initial availability group infrastructure]

We basically want to achieve high availability in the primary datacenter (DATACENTER1), to use the secondary datacenter as DR, and at the same time to offload reporting activity to secondary replicas. But let's say you have low network bandwidth (a WAN of roughly 150-200 Mbps) between your two geographically dispersed datacenters. Depending on the current workload against the availability group, we may experience high network traffic as the number of secondary replicas on the DR site increases. Indeed, for the same payload, the number of log blocks to replicate is directly proportional to the number of secondary replicas.

I decided to simulate this scenario on my lab environment, which reflects the above architecture (2 replicas in the first datacenter and four other replicas in the second datacenter). I used two Lenovo T530 laptops with Hyper-V to simulate the whole environment, with a cross-datacenter network connection handled by two RRAS servers.

In addition, for the sake of precision, here is the test protocol:

  • I used a script which inserts a bunch of data from the primary replica (~ 900MB of data)
  • I ran the same workload test after adding one asynchronous read-only replica at a time, up to 4 replicas.
  • I collected performance data from various perfmon counters focused on the availability group network stack (both primary site and DR site)

Here is the output of the whole test.

[Figure: network usage trend for the availability group]

The picture above is pretty clear. We notice that network bandwidth consumption grows as secondary replicas are added. In the last test, the network bandwidth reached 400 Mbps (received traffic) on the remote datacenter, while it reached 600 Mbps (send traffic) on the primary replica. Why is there a difference in network bandwidth consumption between the primary replica and the remote datacenter? The answer is simple: the consumption on the remote datacenter does not include the traffic sent to the secondary located in the first datacenter for high availability.

We may also notice that the third iteration of the test (1 primary + 1 synchronous secondary + 2 asynchronous secondaries) already shows a critical point if we face a scenario with a WAN connection between the two datacenters limited to 200 Mbps. In this case, the network bandwidth could quickly be saturated by the replication traffic between all the replicas, and here are probably the first symptoms you may encounter:

[Figure: availability group DMV monitoring]

A continuously high log send queue size for each secondary replica concerned on the remote datacenter (250 MB on average in my case)…
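
A quick way to watch this is to query the availability group DMVs; here is a sketch of the kind of query I use (column list trimmed for readability):

  -- Log send queue and send rate per database and replica
  SELECT ar.replica_server_name,
         DB_NAME(drs.database_id)  AS database_name,
         drs.log_send_queue_size,  -- KB waiting to be shipped
         drs.log_send_rate,        -- KB/s
         drs.redo_queue_size
  FROM   sys.dm_hadr_database_replica_states AS drs
  JOIN   sys.availability_replicas AS ar
         ON ar.replica_id = drs.replica_id;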

[Figure: availability group wait statistics]

You may minimize the network overhead by isolating the replication traffic on its own network, but in some cases, if you're unlucky, it will not be enough. This situation may be solved by introducing distributed availability groups and the cascaded-destinations principle, as shown below:

[Figure: distributed availability group infrastructure]

The distributed availability group feature permits offloading the replication traffic from the primary to the read-only secondaries through a replica in the second datacenter. Thus, we drastically reduce the cross-datacenter network bandwidth from 4 replicas to only one. In addition, adding one or several more replicas may be considered, because this new architecture is more scalable: only the local network bandwidth in the second datacenter is impacted.

Here is my new lab configuration after applying distributed availability groups to the previous architecture:

  • In the first datacenter, one availability group AdvGrp that includes two replicas in synchronous replication with automatic failover, for HA purposes
  • In the second datacenter, one availability group AdvGrpDR that includes four replicas enrolled as read-only.
  • One distributed availability group AdvDistGrp which cascades data between the two aforementioned availability groups (see the T-SQL sketch below)
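
For reference, here is a sketch of the distributed availability group definition; the listener URLs and port are illustrative, not my actual lab values:

  -- Run on the primary replica of AdvGrp (first datacenter); a matching
  -- ALTER AVAILABILITY GROUP ... JOIN is then run on the primary of AdvGrpDR.
  CREATE AVAILABILITY GROUP [AdvDistGrp]
     WITH (DISTRIBUTED)
     AVAILABILITY GROUP ON
        'AdvGrp' WITH
        (
           LISTENER_URL      = 'tcp://advgrp-listener.lab.test:5022',
           AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
           FAILOVER_MODE     = MANUAL,
           SEEDING_MODE      = AUTOMATIC
        ),
        'AdvGrpDR' WITH
        (
           LISTENER_URL      = 'tcp://advgrpdr-listener.lab.test:5022',
           AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
           FAILOVER_MODE     = MANUAL,
           SEEDING_MODE      = AUTOMATIC
        );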

[Figure: distributed availability group configuration]

Let’s run the same workload test on the new architecture and let’s have a look at the new output:

The log send queue size got back to normal at the primary replica level in the first datacenter, since all the previous replication traffic is now cascaded through the primary replica of the AdvGrpDR availability group located in the second datacenter.

[Figure: availability group DMV monitoring with the distributed configuration]

From a wait statistics perspective, we got rid of HADR_DATABASE_FLOW_CONTROL, meaning we did not saturate the network link between the two datacenters.

[Figure: availability group wait statistics with the distributed configuration]

The picture below confirms that the replication traffic dropped drastically with this new configuration (150 Mbps vs 400 Mbps with the first architecture).

[Figure: perfmon network usage with the distributed configuration]

Bottom line

In this blog post I tried to demonstrate that using distributed availability groups to cascade the replication traffic through another replica may be a good idea to address scenarios which include many secondary replicas at a remote location with low network bandwidth. However, introducing distributed availability groups has a cost in terms of management, because we have to deal with an additional layer of complexity. But if the rewards make the effort worthwhile, this kind of architecture is worth considering.

Simple CRUD Implementation with Oracle JET - Part III

Andrejus Baranovski - Mon, 2016-12-05 12:09
This is the third and last part in the JET CRUD series. I already had a post about JET CRUD a few months ago - Very Practical CRUD with JET and ADF BC - POST and DELETE Methods. But the current approach is simpler and better structured. I will go through the CREATE, UPDATE and DELETE operations.

You can download or view source code (JET app and ADF BC REST app) directly in GitHub  repository - jetcrud.

CREATE

We need to initialize a new model when creating a new row; this is done in addCustomer.js, where an empty model is initialized. The model is initialized from the common module definition - customController (read more about it in my previous post - Better Oracle JET Code Structuring with Your Own Modules - Part II):


An important trick which will save lines of code - you can reference model attributes directly; there is no need to list all of them in advance. Data-bind the form component to the JET model variable:


Each input can point to an attribute; this is how the UI will get/set the value in the View Model:


The new model must be posted through the JET model API create operation. Here we can convert the model to JSON and use it as a parameter for create:


This is how it looks on UI - form with new customer data. Press Save button:


Behind the scenes REST POST method will be executed and data will be posted to the backend, where it will be processed and inserted into DB:


New row is saved and visible in the results table:


UPDATE

The edit form implementation is very similar to create; actually the two can even be combined into one. The only difference is how you initialize the current row for editing (check - Oracle JET CRUD - Search and Edit Form - Part I) and how you save the changes - you must use a different method from the Oracle JET model API - save. The rest is very similar to the create operation handling:


Edit form UI - it renders different set of fields:


Behind the scenes it executes REST PATCH method, which will update only changed attributes:


DELETE

This is the simplest operation of all three. You should get the instance of the row to delete. This is done through a JavaScript promise - JET will not return the actual model instantly; it returns a promise and you get the model asynchronously. Call the JET model API destroy operation to remove the row entry:


Row is selected and removed:


Behind the scenes it calls REST DELETE method and removes row from backend:


Read previous posts:

1. Oracle JET CRUD - Search and Edit Form - Part I
2. Better Oracle JET Code Structuring with Your Own Modules - Part II

UKOUG 2016 Day 2

Yann Neuhaus - Mon, 2016-12-05 12:07


Today I attended a first session about one of my favorite tools: Upgrade to EM 13c now. The session was presented by Phil Gric from Red Stack Tech.

At the beginning he described the most common mistakes made when implementing Enterprise Manager:

- EM 13c is an enterprise application

- It is a critical part of your infrastructure

- It is designed to help you

- EM 13c is not a glorified DB console

- IT managers should not see EM as a job just for the DBA

He described the main prerequisites before performing an EM 13c upgrade (for example, disabling optimizer_adaptive_features). He also talked about issues such as the fact that the upgrade will create users with the sysman password, so we should ensure that the repository password policy accepts such a password.

There is also an issue when upgrading the agent on AIX to version 13.2: there is a problem securing the agent due to SHA encryption (Metalink Note 1965676.1).

To complete his presentation, he described the main new features in EM 13c: export and import of incident rules, incident compression, always-on monitoring, more than 300 new verbs and generally improved functionality in emcli, system broadcast, and comparison and drift management.

He finally explained why, for him, it is important to regularly upgrade to the latest EM 13c version: it is easy to upgrade, and the longer you wait, the closer you are to the next upgrade :=))

The second presentation was about the 12c upgrade: the good, the bad and the ugly, presented by Niall Litchfield. He talked about his experience of upgrading to 12c a very large infrastructure composed of more than 100 servers, with database versions from 10.1 to 11.2.0.3, both RAC and single instances.

His first advice was to read the Mike Dietrich documentation (Update, Migrate, Consolidate to 12c) and to have a look at the Oracle recommended patch list.

A good reason to upgrade is that support for 11g ends at the end of the year, and extended support is expensive.

The good news after this huge upgrade was that there were no upgrade failures (tens of clusters, hundreds of servers and databases), and a performance benchmark showed a 50% improvement.

The bad and ugly news concerns the number of patches. It also concerns the JSON bundle patches, which require database bundle patches. He also advised us to turn off optimizer_adaptive_features (also recommended to be disabled with EM 13c, PeopleSoft and EBS), as shown below. Finally, a last ugly point is the documentation: there is no single place to read the documentation but many. He also recommended allowing significant time for testing the database and the applications after the upgrade to 12c.
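
For reference, the 12.1-style parameter change he mentioned looks like this (in 12.2 this single parameter is split into optimizer_adaptive_plans and optimizer_adaptive_statistics):

  ALTER SYSTEM SET optimizer_adaptive_features = FALSE SCOPE = SPFILE;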

Then I attended a session about Oracle Database 12c on Windows, presented by Christian Shay of Oracle.

He showed us the database certification on 64-bit Windows. In short, Oracle 12.2 is certified on Windows Server 2012, Windows Server 2012 R2, Windows 10 and Windows Server 2016, while Oracle 12.1 is certified on the same versions except Windows Server 2016.

In Windows 8 and Windows Server 2012, Microsoft introduced the Group Managed Service Account (GMSA), i.e. a domain-level account which can be used by multiple servers in that domain to run their services. A GMSA can be the Oracle Home user for Oracle Database Real Application Clusters (Oracle RAC), single instance, and client installations. It has similarities with the 'oracle' user on Linux, as you are able to connect on Windows with this user and perform administrative tasks like creating a database, installing Oracle or upgrading databases.

In Windows 7 and Windows Server 2008 R2, Microsoft introduced virtual accounts. A virtual account can be the Oracle home user for Oracle Database single instance and client installations.

The recommendations are the following: for a DB server (single instance), use a virtual account to avoid password management (12.2); for 12.1, specify a Windows user account during installation. For RAC databases and Grid Infrastructure, use a domain user or a group managed service account; with a GMSA you do not need to provide the password for any database operation.

He also talked about large page support for Windows. When large page support is enabled, the CPU can access the Oracle database buffers in RAM more quickly, addressing the buffers in 2 MB pages instead of 4 KB increments.

Large pages can be used in two modes: regular or mixed mode. Regular mode means an attempt is made to allocate the whole SGA in large pages; if that amount of large pages is not available, the database will not come up. That is the reason mixed mode is perhaps better: if the whole SGA cannot be allocated in large pages, the remaining pages are allocated as regular pages and the instance still comes up.

I finished my UKOUG day by attending Franck Pachot's session about 12c Multitenant (not a revolution but an evolution). He clearly explained that we do not have to fear 12c Multitenant: since the beginning of Oracle there have been many new features that people feared, but they are now implemented and work correctly. For now the patch/upgrade optimization is only partially implemented; we will see how 12c Multitenant evolves in the coming years.

About Joins

Tom Kyte - Mon, 2016-12-05 10:46
Hello, I am facing a problem with alias names in joins. Please help me out. Below is the example: select y.employee_id from ((select employee_id,department_id,manager_id from employees)a join (select department_id,manager_id from departme...
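
The usual approach is to give each inline view its own alias and qualify the selected columns with that alias; a minimal sketch with illustrative column names:

  -- Each inline view gets an alias (e and d here), and the SELECT list
  -- references columns through those aliases.
  SELECT e.employee_id
  FROM   (SELECT employee_id, department_id, manager_id FROM employees)   e
  JOIN   (SELECT department_id, manager_id              FROM departments) d
         ON e.department_id = d.department_id;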
Categories: DBA Blogs

Dynamic Cursors

Tom Kyte - Mon, 2016-12-05 10:46
I have to write a procedure that takes a parameter, say employee_status (Retired/Not-Retired); based on the parameter I have to use a cursor to select employees from mytable who are retired or not retired and loop over all the employees (Ret...
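
A common pattern for this (a hedged sketch, assuming a hypothetical mytable with an employee_status column) is to pass the parameter straight into the cursor query:

  -- Sketch only: table and column names are assumptions for illustration.
  CREATE OR REPLACE PROCEDURE process_employees (p_employee_status IN VARCHAR2) AS
  BEGIN
    FOR r IN (SELECT employee_id, employee_name
              FROM   mytable
              WHERE  employee_status = p_employee_status)
    LOOP
      -- per-employee processing goes here
      DBMS_OUTPUT.put_line(r.employee_id || ' - ' || r.employee_name);
    END LOOP;
  END process_employees;
  /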
Categories: DBA Blogs

Import Data from Excel to DB Table

Tom Kyte - Mon, 2016-12-05 10:46
Hi, I need an always-runnable SP that can import data from an Excel file and do some DML operations on live tables. First of all, is that possible? If so: 1. What are the ways we can do that? 2. How do we do that using a Stored Procedu...
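
One common approach (a hedged sketch, with illustrative directory, file and column names) is to save the sheet as CSV and read it through an external table, which a stored procedure can then query like any other table:

  -- Sketch only: names are illustrative and the CSV layout is assumed.
  CREATE OR REPLACE DIRECTORY load_dir AS '/u01/app/loads';

  CREATE TABLE emp_load_ext (
    employee_id NUMBER,
    first_name  VARCHAR2(50),
    salary      NUMBER
  )
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY load_dir
    ACCESS PARAMETERS (
      RECORDS DELIMITED BY NEWLINE
      FIELDS TERMINATED BY ','
    )
    LOCATION ('employees.csv')
  );

  -- A procedure can then run DML against live tables using this data:
  INSERT INTO employees (employee_id, first_name, salary)
  SELECT employee_id, first_name, salary FROM emp_load_ext;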
Categories: DBA Blogs

Introducing On Demand Training from Rittman Mead

Rittman Mead Consulting - Mon, 2016-12-05 08:00

Rittman Mead is happy to announce that its much anticipated On Demand Training (ODT) service is live, giving people the opportunity to receive expertly written & delivered self-paced training courses that can be accessed online anywhere, at anytime.

We have been delivering technical & end-user courses based on Oracle Analytics products all over the world for the past decade.

While our classroom sessions continue to offer an unrivalled experience for our trainees, we understand that in the modern era, flexibility is important.

ODT has been built based on the feedback of our clients and network so that you can:

  • Experience our training regardless of the distance to travel

  • Keep your members of staff on site at crucial periods in your company’s calendar

  • Give participants the ability to reinforce the lessons they’ve learnt afterwards

Learn

Use Rittman Mead's LMS as your virtual classroom to access all course materials, lesson demos and slides

Practice

Get hands on with your very own cloud based training environment

Engage

Submit questions to Rittman Mead's Principal Trainer network on subjects that might be specific to your everyday use of the product

Each course provides 30 days of access to the above, ensuring you have enough time to learn at your own pace and reinforce each lesson.

We’re feeling particularly seasonal down here in Brighton, so to celebrate the launch of our platform we’re offering a 40% discount on our first live course OBIEE 12c Front End Development & Data Visualization between now and January 31st.

Simply use the discount code RMODT12C on checkout to take advantage of this exclusive offer.

For more details and to start your On Demand learning experience with Rittman Mead please check out:

  • Our webpage where you can find out more information about ODT and register an account for our LMS
  • The Rittman Mead LMS where you can view our course catalog and purchase courses

You can also contact training@rittmanmead.com if you have any questions about the service.

Happy Learning!!!

Categories: BI & Warehousing

UKOUG Tech 2016 - Super Sunday

Andrew Clarke - Mon, 2016-12-05 07:48
UKOUG 2016 is underway. This year I'm staying at the Jury's Inn hotel, one of a clutch of hotels within a stone's throw of the ICC. Proximity is the greatest luxury. My room is on the thirteenth floor, so I have a great view across Birmingham; a view, which in the words of Telly Savalas "almost takes your breath away".

Although the conference proper - with keynotes, exhibition hall and so on - opens today, Monday, the pre-conference Super Sunday has already delivered some cracking talks. For the second year on the trot we have had a stream devoted to database development, which is great for Old Skool developers like me.

Fighting Bad PL/SQL, Phillip Salvisberg

The first talk in the stream discussed various metrics for assessing the quality of PL/SQL code: McCabe Cyclic Complexity, Halstead Volume, Maintainability Index. Cyclic Complexity evaluates the number of paths through a piece of code; the more paths, the harder it is to understand what the code does under any given circumstance. The volume approach assesses information density (the number of distinct words/total number of words); a higher number means more concepts, and so more to understand. The Maintainability Index takes both measures and throws in some extra calculations based on LoC and comments.

All these measures are interesting, and often offer insights, but none are wholly satisfactory. Phillip showed how easy it is to game the MI by putting all the code of a function on a single line: the idea that such a layout makes our code more maintainable is laughable. More worryingly, none of these measures evaluate what the code actually does. The presented example of better PL/SQL (according to the MI measure) replaced several lines of PL/SQL with a single REGEXP_LIKE call. Regular expressions are notorious for getting complicated and hard to maintain. Also there are performance considerations. Metrics won't replace wise human judgement just yet. In the end I agree with Phillip that the most useful metric remains WTFs per minute.

REST enabling Oracle tables with Oracle REST Data Services, Jeff Smith

It was standing room only for That Jeff Smith, who coped well with jetlag and sleep deprivation. ORDS is the new name for the APEX listener, a misleading name because it is used for more than just APEX calls, and APEX doesn't need it. ORDS is a Java application which brokers JSON calls between a web client and the database: going one way it converts JSON payloads into SQL statements, going the other way it converts result sets into JSON messages. Apparently Oracle is going to REST enable the entire database - Jeff showed us the set of REST commands for managing DataGuard. ORDS is the backbone of Oracle Cloud.

Most of the talk centred on Oracle's capabilities for auto-enabling REST access to tables (and PL/SQL with the next release of ORDS). This is quite impressive and certainly I can see the appeal of standing up a REST web service to the database without all the tedious pfaffing in Hibernate or whatever Java framework is in place. However I think auto-enabling is the wrong approach. REST calls are stateless and cannot be assembled to form transactions; basically each one auto-commits. It's Table APIs all over again. TAPI 2.0, if you will. It's a recipe for bad applications.
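
To give an idea of what that auto-enabling looks like, here is a sketch using the ORDS PL/SQL package; the schema, table and aliases are made up for the example:

  -- Sketch only: HR/EMPLOYEES are illustrative names.
  BEGIN
    ORDS.enable_schema(
      p_enabled             => TRUE,
      p_schema              => 'HR',
      p_url_mapping_type    => 'BASE_PATH',
      p_url_mapping_pattern => 'hr');

    ORDS.enable_object(
      p_enabled      => TRUE,
      p_schema       => 'HR',
      p_object       => 'EMPLOYEES',
      p_object_type  => 'TABLE',
      p_object_alias => 'employees');

    COMMIT;
  END;
  /
  -- The table is then addressable with GET/POST/PUT/DELETE under .../ords/hr/employees/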

But I definitely like this vision of the future: an MVC implementation with JavaScript clients (V) passing JSON payloads to ORDS (C) with PL/SQL APIs doing all the business logic (M). The nineties revival starts here.

Meet your match: advanced row pattern matching, Stew Ashton

Stew's talk was one of those which are hard to pull off: Oracle 12c's MATCH RECOGNIZE clause is a topic more suited to an article with a database on hand so we can work through the examples. Stew succeeded in making it work as a talk because he's a good speaker with a nice style and a knack for lucid explanation. He made a very good case for the importance of understanding this arcane new syntax.

MATCH RECOGNIZE is lifted from event processing. It allows us to define arbitrary sets of data which we can iterate over in a SELECT statement. This allows us to solve several classes of problems relating to bin filtering, positive and negative sequencing, and hierarchical summaries. The most impressive example showed how to code an inequality (i.e. range) join that performs as well as an equality join. I will certainly be downloading this presentation and learning the syntax when I get back home.
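
To give a flavour of the syntax, here is a hedged sketch (against an illustrative ticker table) that reports each run of three or more consecutive price falls:

  -- Sketch only: ticker(symbol, trade_date, price) is illustrative.
  SELECT *
  FROM   ticker
  MATCH_RECOGNIZE (
    PARTITION BY symbol
    ORDER BY trade_date
    MEASURES FIRST(trade_date) AS fall_start,
             LAST(trade_date)  AS fall_end,
             COUNT(*)          AS days_in_run
    ONE ROW PER MATCH
    PATTERN ( strt down{2,} )            -- a starting row, then at least two falls
    DEFINE   down AS price < PREV(price)
  );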

If only Stew had done a talk on the MODEL clause several years ago.

SQL for change history with Temporal Validity and Flash Back Data Archive, Chris Saxon

Chris Saxon tackled the tricky concept of time travel in the database, as a mechanism for handling change. The first type of change is change in transactional data. For instance, when a customer moves house we need to retain a record of their former address as well as their new one. We've all implemented history like this, with START_DATE and END_DATE columns. The snag has always been how to formulate the query to establish which record applies at a given point in time. Oracle 12c solves this with Temporal Validity, a syntax for defining a PERIOD using those start and end dates. Then we can query the history using a simple AS OF PERIOD clause. It doesn't solve all the problems in this area (primary keys remain tricky) but at least the queries are solved.
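
A hedged sketch of the syntax, with an illustrative customer address table:

  -- Sketch only: table and column names are illustrative.
  CREATE TABLE customer_address (
    customer_id NUMBER,
    address     VARCHAR2(200),
    start_date  DATE,
    end_date    DATE,
    PERIOD FOR valid_period (start_date, end_date)
  );

  -- Which address applied on 1 June 2016?
  SELECT *
  FROM   customer_address AS OF PERIOD FOR valid_period DATE '2016-06-01'
  WHERE  customer_id = 42;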

The other type of change is change in metadata: when was a particular change applied? what are all the states of a record over the last year? etc. These are familiar auditing requirements, which are usually addressed through triggers and journalling tables. That approach carries an ongoing burden of maintenance and is too easy to get wrong. Oracle has had a built-in solution for several years now, Flashback Data Archive. Not enough people use it, probably because in 11g it was called Total Recall and a chargeable extra. In 12C Flashback Data Archive is free; shorn of the data optimization (which requires the Advanced Compression package) it is available in Standard Edition not just Enterprise. And it's been back-ported to 11.2.0.4. The syntax is simple: to get a historical version of the data we simply use AS OF TIMESTAMP. No separate query for a journalling table, no more nasty triggers to maintain... I honestly don't know why everybody isn't using it.
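
Again a hedged sketch, with illustrative names, of how little is involved:

  -- Sketch only: archive, tablespace and table names are illustrative.
  CREATE FLASHBACK ARCHIVE fda_7yr TABLESPACE fda_ts RETENTION 7 YEAR;

  ALTER TABLE customer_address FLASHBACK ARCHIVE fda_7yr;

  -- What did this customer's row look like a month ago?
  SELECT *
  FROM   customer_address AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '30' DAY
  WHERE  customer_id = 42;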

So that was Super Sunday. Roll on Not-So-Mundane Monday.

Oracle Certification Dumps Can Ruin Your Career

Complete IT Professional - Mon, 2016-12-05 05:00
Are you looking to get certified? Want to get your hands on some Oracle certification dumps for an exam to help you pass? That’s a bad idea. I’ll explain why in this article. What Is a Brain Dump? So what is a brain dump, when you talk about IT certifications? A brain dump is a […]
Categories: Development

UKOUG – Tech16 – Super Sunday

Yann Neuhaus - Mon, 2016-12-05 04:38


This year, I had the opportunity to attend UKOUG 2016, which took place in Birmingham. The event normally begins on Monday, but each year there is a complimentary afternoon of highly technical presentations, called Super Sunday, for those who are registered for Tech16.

For this first Super Sunday afternoon at UKOUG 2016, I followed 2 sessions and participated in a hands-on lab around the cloud.

The 1st session was very interesting, with lots of useful information about APEX and Node.js.

This session was called NodeJS & Oracle – A Match Made in Heaven, and the presenter, Mr Trond Enstad, focused the session on demonstrating the power of using Node.js.

He installed Node.js and an Oracle Database client, and created a Node.js config file for extracting sar command information and storing it in a remote Oracle Database. After that, he quickly created D3 charts in APEX showing real-time monitoring of that stored information. I'm really enthusiastic to do some tests.

The 2nd session “Auto REST Enabling Your Tables, Views, & More With SQL Developer & REST Data Services” from Mr Jeff Smith was also interesting providing useful information about the ORDS product from Oracle.

After these 2 interesting sessions, I followed an Oracle Cloud Platform Hands On Lab called “Cloud Native Stack on Oracle Bare Metal Compute”.

In this lab, we created a Virtual Cloud Network (VCN) in which we were able to create a bare metal instance with Oracle Linux 7.2. Once it was launched, we installed MongoDB and set up Node.js and MEAN.js. At the end, we were able to access the MEAN home page.

It was interesting to see how fast we were able to provision a Bare Metal instance and install application components on it.

See you tomorrow for other interesting sessions.

Ten Years and Counting

Steven Chan - Mon, 2016-12-05 02:05

This blog turned 10 this year.  When I started it in 2006 with this article, I had no idea that it would become what it is today.  Things certainly turned out differently than I expected. 


Photo credit: Fulvio Spada, Wikimedia Commons

This little blog now gets millions of visits a year.  It has helped us shape EBS to meet your needs better.  I've been schooled in the many ways that we can do things better for you.  I've been chastised for offhand editorial remarks and have learned to deal with hate mail.  I've learned how hard it is to be a journalist, to state just the facts and pare prose down to its simplest form, accessible to all (with this verbose article being an exception to that rule).

Over the years, I've had several people who have helped me paint this fence. Some of those people have since joined the ranks of Oracle's esteemed alumni.  Some still pitch in occasionally.  Either way, I'm grateful for their contributions over the years.  As with the Stone Soup parable, this little corner of the world is richer for their sharing their expertise with us.

I am sometimes mistaken as an expert, but the real experts are my colleagues and friends in the E-Business Suite division and our sister organizations across Oracle.  They deserve the real credit for all of their dedication, professionalism, and expertise. They are a brilliant team and I am always humbled by their desire to do the best for you.  I am privileged to work alongside them and talk about their achievements on their behalf.  Please join me in thanking all of them for their hard work in polishing EBS to a shine for you. 

And, of course, thank you for joining me on this journey.  I am fortunate to have many of the world's leading EBS experts as readers and friends, and they contribute to this blog in ways large and small.  This blog would not be the same without the lively interactions with all of you.  I look forward to seeing what lies down the road around the corner!

Categories: APPS Blogs

Multitenant internals – Summary

Yann Neuhaus - Mon, 2016-12-05 02:02

Today at UKOUG TECH16 conference I’m presenting the internals of the new multitenant architecture: 12c Multitenant: Not a Revolution, Just an Evolution. My goal is to show how it works, that metadata links and object links are not blind magic.
Here are the links to the blog posts I’ve published about multitenant internals.

The dictionary separation, METADATA LINK and OBJECT LINK (now called DATA LINK):
http://blog.dbi-services.com/multitenant-dictionary-what-is-stored-only-in-cdbroot/
http://blog.dbi-services.com/oracle-12c-cdb-metadata-a-object-links-internals/
http://blog.dbi-services.com/oracle-multitenant-dictionary-metadata-links/
http://blog.dbi-services.com/oracle-multitenant-dictionary-object-links/
http://blog.dbi-services.com/multitenant-internals-how-object-links-are-parsedexecuted/
http://blog.dbi-services.com/multitenant-internals-object-links-on-fixed-tables/
An exemple with the AWR views:
http://blog.dbi-services.com/12c-multitenant-internals-awr-tables-and-views/
How the upgrades should work:
http://blog.dbi-services.com/oracle-multitenant-dictionary-upgrade/
What about shared pool rowcache and library cache:
http://blog.dbi-services.com/oracle-multitenant-dictionary-rowcache/
http://blog.dbi-services.com/12c-multitenant-cursor-sharing-in-cdb/
And how to see when session switches to CDB$ROOT:
http://blog.dbi-services.com/oracle-12cr2-multitenant-containers-in-sql_trace/

If you are in Birmingham, I’m speaking on Monday and Wednesday.


node-oracledb 1.12.0-dev available for preview

Christopher Jones - Mon, 2016-12-05 00:57

A preview of node-oracledb 1.12.0-dev is available on GitHub and can be installed with:

  npm install oracle/node-oracledb.git#v1.12.0-dev

Node-oracledb is the Node.js add-on for Oracle Database.

The key things to look at are all the LOB enhancements - we want feedback. There are a couple more LOB enhancements we want to make before we go production, but I know you'll like the direction. Drop us a line via GitHub or email with your comments.

This release also has a connection pool ping feature that will improve application availability when transient network outages occur. It's enabled by default and will be transparent to most users, however it can be tuned or disabled if you have a special situation.

The CHANGELOG has all the other updates in this release. I'll blog all the features in more detail when a production bundle is released to npmjs.com.

Resources

Issues and questions about it can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line!

Node-oracledb documentation is here.

UKOUG Super Sunday

Yann Neuhaus - Sun, 2016-12-04 16:30


Today at UKOUG Super Sunday in Birmingham, I had the opportunity to attend some interesting sessions.

The first presentation was about Oracle RAC internals and its new features in version 12.2.0.1 on Oracle Cloud. The main new features concern the cache fusion, the undo header hash table, the leaf nodes, and the hang manager.

In 12c Release 2 in a RAC environment, cache fusion automatically chooses an optimal path: it collects and maintains statistics on the private network and uses this information to find the optimal path (network or disk) to serve blocks. We can consider that flash storage may provide better access time to data than the private network in the case of high load.

In order to reduce remote lookups, each instance maintains a hash table of recent transactions (active and committed). The Undo Header Hash table therefore improves scalability by eliminating remote lookups.

Flex Cluster and leaf nodes were implemented in 12cR1. With 12cR2, it is now possible to run read-only workload on instances running on leaf nodes.

Hang Manager was introduced with 12cR2. It identifies the sessions holding resources on which other sessions are waiting. Hang Manager is able to detect hangs across layers.

The second session was about the use of strace, perf and gdb. This was a very funny presentation with no slides, only technical demos, about how to use strace, perf or gdb without being an expert. The speaker showed us the different analyses we can perform with strace, gdb or perf when running a SQL query against a table in a file system tablespace or an ASM tablespace.

Using those tools allowed us to understand the mechanics of physical reads and asynchronous I/O, and showed us the differences in asynchronous I/O and direct path reads between ASM and file system storage.

It showed us that the use of strace and gdb is very simple but not recommended in a production environment.

The last session was about dba_feature_usage_statistics, and the speaker described the components behind the scenes.

This view, as its name indicates, displays information about database feature usage statistics. It gives an overview of each option and pack that has been used in the database and is currently in use. It also provides information about when a feature was first used and when it was last used.
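
A simple query such as this sketch shows that overview:

  SELECT name, version, detected_usages, currently_used,
         first_usage_date, last_usage_date
  FROM   dba_feature_usage_statistics
  WHERE  detected_usages > 0
  ORDER  BY name, version;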

It is not very easy to find information in the Oracle documentation about how this view is populated, but the speaker gave us important information about wri$_dbu_usage_sample, wri$_dbu_feature_usage and wri$_dbu_feature_metadata, which are the key tables behind the dba_feature_usage_statistics view.

He also showed us a method to refresh manually the view dba_feature_usage_statistics.

Tomorrow another day of interesting sessions is waiting for us!

Oracle 11g Database , and Weblogic Server 11g Forms , Reports and facing dead slow

Tom Kyte - Sun, 2016-12-04 16:26
I am using Oracle 11g Database and WebLogic Server 11g Forms and Reports, and facing a dead-slow issue at developer level as well as user level. I tested 6i forms on the same database and same tables and they run OK. Please can anyone help me with what could be the ...
Categories: DBA Blogs

string patterns

Tom Kyte - Sun, 2016-12-04 16:26
I need to write a query which will look in a particular column for strings matching the patterns nnnnnn or Annnnn, where n stands for a number and A stands for a capital letter. Any strings which are in a different pattern other than these two shou...
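
One way to express those two patterns (a hedged sketch, with an illustrative table and column) is a single regular expression:

  -- nnnnnn = six digits; Annnnn = one capital letter followed by five digits.
  SELECT col
  FROM   t
  WHERE  REGEXP_LIKE(col, '^([0-9]{6}|[A-Z][0-9]{5})$');

  -- Rows NOT matching either pattern:
  SELECT col
  FROM   t
  WHERE  NOT REGEXP_LIKE(col, '^([0-9]{6}|[A-Z][0-9]{5})$');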
Categories: DBA Blogs

LOG(2,4/8/16 Issue

Tom Kyte - Sun, 2016-12-04 16:26
Hi Chris & Connor, good morning! You are both doing a good job. My question is: the minimum height of a Red-Black tree is H = FLOOR(LOG(2,N)), where N is the number of nodes. Ideally, when N=4 then H=2 and when N=8 then H=3, and so on as per the formula. But, the LOG(...
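
A hedged sketch of the behaviour usually being asked about here: LOG(2,8) is computed with NUMBER arithmetic and can come back fractionally below 3, so FLOOR truncates it to 2; rounding before flooring avoids the surprise:

  -- LOG(2,8) may be returned as 2.999999999... rather than exactly 3,
  -- so FLOOR() alone can give 2. Rounding first gives the expected answer.
  SELECT LOG(2,8)                    AS raw_value,
         FLOOR(LOG(2,8))             AS naive_floor,
         FLOOR(ROUND(LOG(2,8), 10))  AS rounded_floor
  FROM   dual;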
Categories: DBA Blogs

Tom - what is going to happen to this website?

Tom Kyte - Sun, 2016-12-04 16:26
I wish you much happiness in your upcoming retirement. What will happen to this website? Could one buy an archived copy? It has been such an amazing resource through my career. Even if you are going to hang it up, I'm sure it is still very usefu...
Categories: DBA Blogs
