Feed aggregator

sum() at each level in hierarchical query

Tom Kyte - Fri, 2018-11-09 18:06
Consider the following query: select level,sys_connect_by_path(last_name,'\') name,salary from employees start with last_name='King' connect by prior employee_id=manager_id. Is it possible to get the result like name sum_of_all_l...
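The question is truncated in the feed, but a common way to get a subtree total alongside each row is a correlated scalar subquery that runs its own CONNECT BY rooted at the current employee. A minimal sketch against the HR EMPLOYEES table named in the question (the column alias is mine):

select  level,
        sys_connect_by_path(last_name, '\') as name,
        salary,
        (select sum(e2.salary)                        -- this employee plus everyone below them
           from employees e2
          start with e2.employee_id = e.employee_id
        connect by prior e2.employee_id = e2.manager_id) as subtree_salary
from    employees e
start   with e.last_name = 'King'
connect by prior e.employee_id = e.manager_id;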
Categories: DBA Blogs

Date in comparison with string

Tom Kyte - Fri, 2018-11-09 18:06
Hi Tom, I have a table named RTBS_SUBS_EVENT and there is a date column named expiry_date. expiry_date is a partitioned column too. There is a column named ID and it is indexed. The NLS_DATE_FORMAT of the database is 'DD-MON-RR'. I run this query and th...
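The question text is cut off, but the usual advice here is to avoid comparing a DATE column with a string, since that forces an implicit TO_DATE using the session's NLS_DATE_FORMAT. A sketch using the table and column named in the question (the literal values are made up):

-- Implicit conversion: the result depends on the session's NLS_DATE_FORMAT
select count(*) from rtbs_subs_event where expiry_date > '01-JAN-19';

-- Explicit conversion, independent of session NLS settings
select count(*) from rtbs_subs_event
where  expiry_date > to_date('01-JAN-2019', 'DD-MON-YYYY');

-- Or an ANSI date literal
select count(*) from rtbs_subs_event
where  expiry_date > date '2019-01-01';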
Categories: DBA Blogs

Join elimination

Tom Kyte - Fri, 2018-11-09 18:06
Hello All, While running some tests using LiveSQL, which is Oracle 18c, I found a strange behavior regarding join elimination. That is, join elimination DOES happen even if the constraints are DEFERRED! My LiveSQL test script can be found he...
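For readers unfamiliar with the feature, here is a minimal sketch of the basic join-elimination setup (hypothetical tables, not the poster's LiveSQL script); the open question above is whether the same elimination should still occur when the foreign key is DEFERRABLE and currently DEFERRED:

create table dept (
    deptno  number primary key,
    dname   varchar2(30)
);

create table emp (
    empno   number primary key,
    deptno  number not null references dept
);

-- No DEPT columns are selected and the FK guarantees the join is lossless,
-- so the optimizer can eliminate the join and scan only EMP.
select e.empno
from   emp e
join   dept d on d.deptno = e.deptno;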
Categories: DBA Blogs

Is there any point backing up flashback 'flb' files?

Tom Kyte - Fri, 2018-11-09 18:06
Hi, Is there any point/benefit in backing up flb files to tape using an OS utility? Or, to put it another way: would backing up flb files just be a waste of tape? Thanks
Categories: DBA Blogs

SQL Pass Summit 2018: SQL Server 2019 features coming soon!

Yann Neuhaus - Fri, 2018-11-09 15:13

It’s the 20th anniversary of the event and the keynote today was amazing (see the blog from Christophe) for several reasons, but the most important one is that the audience gets better every year: not only DBAs, but also people using their applications connected to the database.
Today, one of my focus will be the next version of SQL Server 2019 with the session of Bob Ward, Asad Khan & Amit Banerjee from Microsoft.
I have already tested some new features and written about them, but it’s really good to see a session about SQL Server 2019 and to discover new features that can help us.
Before writing about the session, I want to come back to the few-minute demo done by Bob Ward & Connor Cunningham during the keynote, because it is not available in the current CTP; it will be in a future release of SQL Server. The problem: you have a huge workload and tempdb is under pressure, and you notice in tempdb that only system tables are the cause… What can we do? We do not have access to these tables, they are internal… Then Connor applied an update of SQL Server and it was fixed! AMAZING… it’s really a dream for us! Connor explained that the SQL Server development team solved it using in-memory technology, putting these system tables in-memory to avoid the problem. This is illustrated in the slide as “tempdb: It Just Runs Faster” and is part of Intelligent Query Processing, alongside Memory Grant Feedback and Adaptive Joins, which are already available in SQL Server 2017 in the Adaptive Query Processing family.
The name is also changing from Adaptive to Intelligent! :lol:
After this future new feature, let’s focus on the Roadmap for SQL Server: Inside SQL Server 2019.
I will not write about features already in the CTP but about features coming soon.
On the security side, I’m very curious to test “Static Data Masking”.
We will be able to mask data even from users with admin access.
It can be useful when doing a copy from prod to dev for developers who have the db_owner role… they won’t see the real data in dev! Perfect for us!
On the High Availability topic, they announced that system databases can finally be part of Availability Groups. You cannot imagine the joy that this announcement brings to all of us!
The other news is Accelerated Database Recovery. This new feature helps a rollback of a large transaction come back to the previous state much faster. I attended a session explaining how it works; it was very interesting and I’m excited to test it on-premises.
During the demo at the keynote, Bob used a new function that I find very useful: sys.fn_PageResCracker, used together with the DMV sys.dm_db_page_info to troubleshoot page resources.
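As a rough illustration (my own sketch based on the documented SQL Server 2019 usage, not the keynote demo), the two objects are typically combined like this to decode the page_resource column of a waiting request:

SELECT  r.session_id,
        r.wait_type,
        pg.database_id,
        pg.file_id,
        pg.page_id,
        pg.object_id,
        pg.page_type_desc
FROM    sys.dm_exec_requests AS r
CROSS APPLY sys.fn_PageResCracker(r.page_resource) AS prc      -- splits the binary page resource
CROSS APPLY sys.dm_db_page_info(prc.db_id, prc.file_id, prc.page_id, 'DETAILED') AS pg
WHERE   r.page_resource IS NOT NULL;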

And to finish, the last slide of the presentation shows the next direction of SQL Server.
The dbi services SQL Server team will be happy to test all these new features and blog about them!

The article SQL Pass Summit 2018: SQL Server 2019 features coming soon! appeared first on Blog dbi services.

How do I add an image to a page in Oracle APEX?

Joel Kallman - Fri, 2018-11-09 14:04
Preface:  Many blog posts about Oracle APEX seem to cover a new feature or something rather obtuse & complex.  The sophisticated blog posts have little relevance to someone who is completely new to APEX and Oracle Database.  Having worked with APEX since 1999, it's always difficult for me to put myself in the shoes of a brand new student of APEX.  When I interact with people new to APEX, I try to look at APEX through their eyes, and listen to what they find confusing.  Sometimes, it's completely obvious to me and, at the same time, completely vexing to them.  Thus, this will begin a series of blog posts on how to perform very common tasks in APEX.  It will be boring for the experienced user, but hopefully helpful to the APEX developer in training.  And separately, we will also strive to make APEX simpler and easier to understand, so that blog posts like this one become unnecessary in the future.



Gerald Venzl, a highly respected Oracle Database product manager, was recently participating in a hackathon with a team of people, and they were using Oracle APEX as part of their solution.  They produced a QR code for the URL to their app, they saved this QR code image to a local file, and they wanted to include the image on a page in their APEX application.  As Gerald stated, it took more than 30 minutes for this learned and competent group of people to figure out how to do this.  This is not a criticism of Gerald and his colleagues, it's a criticism of APEX.  Gerald and his team were expecting a simple upload to a page item of type Image and they would be done, right?  Well, not so simple.

This blog post is not intended to cover the case where you have images in BLOB columns in a database table (we'll cover that in the future), or the case where the image can already be referenced via a URL.  I am presenting the simple case of you having an image on the file system on your local computer, and you want to reference it in your APEX page.

From a high-level, the steps are:
  1. Upload the file as a static application file.
  2. Using the HTML IMG tag, reference the static application file in your application.


Now, for the more detailed explanation.

Before you can reference an image in your APEX application, it has to be in a location which can be "served" by your Web server.  With APEX, you have access to the back-end Oracle Database, but you don't have access to the Web server file system.  Fortunately, there is an easy way to upload the image into the Oracle Database and have it served by your Web server when used within an APEX application.  These are called Static Application Files.

The steps are really quite simple:
  1. Edit your application in the Application Builder
  2. Click Shared Components
  3. Click Static Application Files
  4. Click Upload File
  5. Choose the file from your local computer
  6. Click the Upload button




In the picture above, this is the list of Static Application Files in the APEX Application Builder, after uploading file Three_Stooges.jpg.  Under the Reference column, there is the string #APP_IMAGES#Three_Stooges.jpg.  This is how you can reference the file when used in an HTML context in your APEX application.  This reference means nothing when used outside of an APEX application.  When someone is running your application, the APEX engine will replace the substitution string #APP_IMAGES# with an absolute URL reference to an embedded REST endpoint which will return the stored image.

To reference this image in your application, you just need to include a reference to it from an HTML IMG tag on your APEX page.  This image reference would have to be entered into the attributes of your APEX application wherever it's appropriate to embed HTML.  This includes the headers and footers of page regions, templates, and even the Source attribute of certain region types.  An easy way to get started is to reference an image in a region of type Static Content.

As an example, edit a page in Page Designer.  In the gallery at the bottom of the page, drag a region of type Static Content onto your page.  In the Source attribute of this region, enter the HTML string to reference the image:

<img src="#APP_IMAGES#Three_Stooges.jpg">

Obviously, reference your file and not the name of the file in my example (Three_Stooges.jpg).  Save your page and run.  That's all there is to it!






One last point.  If you do have access to the web server file system, and you wish to include static file references in your application, then definitely put them on your web server file system and simply serve them from there.  It will always be more efficient to have these served by a web server instead of having to fetch them out of the database.

Leveraging Snippets to Create Wiki Pages in Oracle Developer Cloud

Shay Shmeltzer - Fri, 2018-11-09 12:53

Snippets are a feature of Oracle Developer Cloud Service that gives you a place to store reusable pieces of code as part of your project. These are the type of code snippets that you don't want as part of your core Git repository, but that you still find useful. Snippets can be your own private ones or shared among your team.

One nice usage for code snippets is the ability to quickly include them in a wiki page. This allows you, for example, to create a template of a wiki page and then quickly apply it to a new page that you create. Using the correct markup for your wiki page format (Confluence in the example in the video), you can create a collection of templates. For example, a template for a feature page, a template for a meeting minutes page, etc. Then your team members can quickly create pages that conform to these templates.

In the video below I show you how to leverage this combination step by step.

Categories: Development

Pass Summit – dbi visit day 4 (2)

Yann Neuhaus - Fri, 2018-11-09 12:38
Power BI 5 top custom visuals

The development of Power BI itself is fast, but so are the means to represent data: new Power BI custom visuals are constantly being developed by the community and Microsoft.
You can develop your own custom visuals and publish them on the marketplace for your organization or the whole community.
Tip: check whether the visuals are certified, meaning they have been tested against the different Power BI capabilities.
You can find the gallery of custom visuals here: https://Appsource.microsoft.com/en-us/marketplace

Here are 5 different ones presented during the PASS Summit session:

1- SandDance
Used to display every data element of your data set on X/Y axes. It also lets you zoom into the data to isolate a plotted area, and create scenes to show your data with animations.


2- Synoptic Panel
Connects areas in a custom image with attributes in your data model. Use the designer at Https://Synoptic.design to define the areas in your image that you will map to your data set.


3- Infographic designer
Customize the representation of your data to your specific need or business, even with your own images.


4- Timeline Storyteller
Multiple representations and layouts of data based on a time axis, which you can organize in order to tell a story with your data.


5- PowerApps
Integrates Line of Business applications into Power BI. This can also be used to have writebacks in Power BI.


Development of your PowerApp must be done on the website Https://Powerapps.com if you integrate it using Power BI Desktop. The experience is different in the service version: there, including the PowerApps visual allows you to create the PowerApp automatically with the fields of your data set.

Again an interesting data analytic day…stay tuned..

The article Pass Summit – dbi visit day 4 (2) appeared first on Blog dbi services.

Pass summit – day 4

Yann Neuhaus - Fri, 2018-11-09 12:01
Optimizing Multi-Billion Row Tables in Tabular in 2018

 

I wanted to attend the session presented by Marco Russo to see his approach to optimizing performance in a Tabular model.
The first thing to understand is how the data is stored and organized in a Tabular model. It uses, of course, the xVelocity in-memory capabilities with the VertiPaq column storage engine, which organizes the data, compressing it by column, in combination with a dictionary encoding mechanism.
Therefore the most important aspects to take care of in a model are:

  • Reducing the number of distinct values of columns
  • Focus on columns with many distinct values

What also impacts the performance of data retrieval is the relationships in the model that define the cardinality: the higher the cardinality, the slower the query will be.

In order to measure these criteria, the following tools are available:

  • VertiPaq Analyzer: it can help you get insight into your tabular model, with information on the number of items in your columns and their size.
  • DAX Studio: an open source tool that you can download by visiting https://Daxstudio.org

DAX Studio will help you analyze how your queries are performing, and you will be able to see how the performance is related to the number of cores, the size of the columns queried and, as already said above, the size of the cardinality involved.

What is recommended for a tabular model is the fastest possible CPU speed (>= 3 GHz) and large CPU L2/L3 caches; a large CPU cache is essential to avoid performance problems, but having too many CPUs can be counterproductive, and having more sockets can negatively impact your performance. The fastest RAM is also recommended (>= 3200). You have to be aware of the latency of memory and storage access, and of the huge impact it has on performance.

Marco delivered best practices using the available Azure tiers.

The way you can address too high a cardinality between fact and dimension is to reduce the dimension with sub-categories, but also to reduce your fact table by splitting it and building aggregates. You have to re-shape your data model.

The great thing is that the new “dual storage mode and aggregation” option in Power BI Premium enables you to define aggregation tables for big fact tables that do not fit in memory within Power BI.

Again an interesting data analytic day…stay tuned..

The article Pass summit – day 4 appeared first on Blog dbi services.

Introduction to Oracle Digital Assistant Dialog Flow

Andrejus Baranovski - Fri, 2018-11-09 08:59
Oracle Digital Assistant is the new name for Oracle Chatbot. Actually it is not only a new name - from now on the chatbot functionality is extracted into a separate cloud service, the Oracle Digital Assistant (ODA) Cloud service. It runs separately now, no longer as part of Oracle Mobile Cloud Service. I think this is a strong move forward - it should make the ODA service lighter, easier to use and more attractive to someone who is not an Oracle Mobile Cloud Service customer.

I was playing around with the dialog flow definition in ODA and would like to share a few lessons learned. I extracted my bot definition from ODA and uploaded it to a GitHub repo for your reference.

When a new bot is created in the ODA service, first of all you need to define a list of intents and provide sample phrases for each intent. Based on this information the algorithm trains and creates a machine learning model for user input classification:


ODA gives us a choice - to use a simpler linguistics-based model or a machine learning algorithm. In my simple example I was using the first one:


An intent is assigned entities:


Think about an entity as a type, which defines a single value of a certain basic type, or it can be a list of values. An entity defines the type for dialog flow variables:


The key part of the bot implementation is the dialog flow. This is where you define rules for how to handle intents and also how to process the conversation context. Currently ODA doesn't provide a UI to manage the dialog flow; you will need to type the rules by hand (if your bot logic is complex, you can probably create the YAML structure outside of ODA). I would highly recommend reading the ODA dialog flow guide, as this is the most complex part of the bot implementation - The Dialog Flow Definition.

The dialog flow definition is based on two main parts - context variables and states. Context variables are where you define the variables accessible in the bot context. As you can see, it is possible to use either basic types or our own defined type (entity). The type nlpresult is a built-in type; a variable of this type gets the classified intent information:


The states part defines a sequence of stops (or dialogs); the bot transitions from one stop to another during the conversation with the user. Each stop points to a certain component; there is a number of built-in components, and you could use custom components too (to call a REST service, for example). In the example below the user types submit project hours, this triggers classification and the result is handled by System.Intent, from where the conversation flow starts - it goes to the dialog where the user should select a project from the list. While the conversation flow stays in the context, we don't need to classify user input, because we treat user answers as input variables:


As soon as the user selects a project, the flow transitions to the next stop, selecttask, where we ask the user to select a task:


When a task is selected, we go to the next stop, to select the time spent on this task. See how we are referencing previous answers in the current prompt text; we can refer to and display a previous answer through an expression:


Finally we ask a question - whether the user wants to type more details about the task. By default all stops are executed in sequential order from top to bottom; if the transition is empty, this means the next stop will execute - confirmtaskdetails in this case. The next stop is conditional (System.ConditionEquals component); depending on the user's answer it will choose which stop to execute next:


If the user chooses Yes, it will go to the next stop, where the user needs to type text (System.Text component):


At the end we print the task logging information and ask if the user wants to continue. If they answer No, we stop the context flow; otherwise we ask the user what they want to do next:


We are now out of the conversation context; when the user types a sentence, it will be classified to recognize a new intent and the flow will continue:


I hope this gives you a good introduction to bot dialog flow implementation in the Oracle Digital Assistant service.

Sorting -- sort a string like a number.

Tom Kyte - Thu, 2018-11-08 23:46
I have an alphanumeric column in the database. It contains only numeric values. Is it possible to do a numeric sort directly with a SQL command like "order by <column>"?
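A short sketch of the usual answers (the table and column names are hypothetical): either convert explicitly in the ORDER BY, or, on Oracle 12.2 and later, use the ON CONVERSION ERROR clause so a stray non-numeric value doesn't raise ORA-01722:

-- Plain numeric sort; raises ORA-01722 if any value is not numeric
select val
from   t
order by to_number(val);

-- Oracle 12.2+: non-numeric values become NULL instead of raising an error
select val
from   t
order by to_number(val default null on conversion error);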
Categories: DBA Blogs

Blocks read using index vs. full table scan

Tom Kyte - Thu, 2018-11-08 23:46
Hi Tom, I have a 3 part question about performance... 1) The first question is sort of an introductory summary question. 2) Then I have an example ending with a question. 3) Finally I have a very specific 'important' question. Why is it some...
Categories: DBA Blogs

Column And Table Redefinition With Minimal Locking

Jeremy Schneider - Thu, 2018-11-08 20:22

TLDR: Note to future self… (1) Read this before you modify a table on a live PostgreSQL database. If you do it wrong then your app might totally hang. There is a right way to do it which avoids that. (2) Especially remember the lock_timeout step. Many blog posts around the ‘net are missing this and it’s very important.

Yesterday I was talking to some PostgreSQL users (who, BTW, were doing rather large-scale cool stuff in PG) and they asked a question about making schema changes with minimal impact to the running application. They were specifically curious about changing a primary key from INT to BIGINT.  (Oh, you are making all your new PK fields BIGINT right?)

And then, lo and behold, I discovered a chat today on the very same topic. Seemed useful enough to file away on my blog so that I can find it later. BTW I got permission from Jim Nasby, Jim F and Robins Tharakan to blame them for this…  ;)

Most useful part of the chat was how to think about doing table definition changes in PostgreSQL with minimal application impact due to locking:

  1. Use lock_timeout.
    1. Can be set at the session level.
  2. For changes that do more than just a quick metadata update, work with copies.
    1. Create a new column & drop old column instead of modifying.
    2. Or create a new table & drop old table.
    3. Use triggers to keep data in sync.
    4. Carefully leverage transactional DDL (PostgreSQL rocks here!) to make changes with no windows for missing data.

We can follow this line of thought even for a primary key – creating a unique index on the new column, using existing index to update table constraints, then dropping old column.

One of the important points here is making sure that operations which require locks are metadata-only. That is, they don’t need to actually modify any data (while holding said lock) for example rewriting or scanning the table. We want these ops to run very very fast, and even time out if they still can’t run fast enough.
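To make the list above concrete, here is a rough sketch of the INT-to-BIGINT primary key change under those rules (table, column and constraint names are hypothetical, and details such as foreign keys referencing the old key are ignored):

SET lock_timeout = '2s';                       -- give up instead of queueing behind long-running locks

ALTER TABLE orders ADD COLUMN id_new bigint;   -- metadata-only, fast

-- keep the new column in sync for all new/changed rows
CREATE FUNCTION orders_sync_id() RETURNS trigger AS $$
BEGIN
    NEW.id_new := NEW.id;
    RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER orders_sync_id
    BEFORE INSERT OR UPDATE ON orders
    FOR EACH ROW EXECUTE PROCEDURE orders_sync_id();

-- backfill existing rows in small batches (repeat until 0 rows updated)
UPDATE orders
   SET id_new = id
 WHERE ctid IN (SELECT ctid FROM orders WHERE id_new IS NULL LIMIT 5000);

-- build the replacement index without blocking writes (cannot run inside a transaction)
CREATE UNIQUE INDEX CONCURRENTLY orders_id_new_key ON orders (id_new);

-- the switch itself is quick metadata work, though adding the PK also
-- validates NOT NULL on id_new, which scans the table under the lock
BEGIN;
ALTER TABLE orders DROP CONSTRAINT orders_pkey;
ALTER TABLE orders ADD CONSTRAINT orders_pkey PRIMARY KEY USING INDEX orders_id_new_key;
ALTER TABLE orders DROP COLUMN id;
ALTER TABLE orders RENAME COLUMN id_new TO id;
COMMIT;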

A few minutes on google yields proof that Jim Nasby was right: lots of people have already written up some really good advice about this topic.  Note that (as always) you should be careful about dates and versions in stuff you find yourself.  Anything pre-2014 should be scrutinized very carefully (PostgreSQL has changed a lot since then); and for the record, PostgreSQL 11 changes this specific list again (and none of these articles seem to be updated for pg11 yet). And it should go without saying, but test test test…

  • This article from BrainTree is my favorite of what I saw this morning. Concise yet clear list of green-light and red-light scenarios, with workaround for all the red lights.
    • Add a new column, Drop a column, Add an index concurrently, Drop a constraint (for example, non-nullable), Add a default value to an existing column, Add an index, Change the type of a column, Add a column with a default, Add a column that is non-nullable, Add a column with a unique constraint, VACUUM FULL
  • Citus has a practical tips article that’s linked pretty widely.
    • adding a column with a default value, using lock timeouts, Create indexes, Taking aggressive locks, Adding a primary key, VACUUM FULL, ordering commands
  • Alexey Vasiliev assembled a list in 2016 which is worth reviewing.
    • Add a new column, Add a column with a default, Add a column that is non-nullable, Drop a column, Change the type of a column, Add a default value to an existing column, Add an index, Add a column with a unique constraint, Drop a constraint, VACUUM FULL, ALTER TABLE SET TABLESPACE
  • Joshua Kehn put together a good article in late 2017 that especially illustrates the importance of using lock_timeout (though he doesn’t mention it in the article)
    • Default values for new columns, Adding a default value on an existing column, Concurrent index creation, ALTER TABLE, importance of typical transaction length

For fun and posterity, here’s the original chat (which has a little more detail) where they gave me these silly ideas:

[11/08/18 09:01] Colleague1: I have a question with regard to APG. How can we make DDL modifications to a table with minimalistic locking (downtime)?
[11/08/18 09:31] Jim N: It depends on the modification you're trying to make. Many forms of ALTER TABLE are very fast. Some don't even require an exclusive lock.
[11/08/18 09:32] Jim N: What you have to be careful of are alters that will force a rewrite of the entire table. Common examples of that are adding a new column that has a default value, or altering the type of an existing column.
[11/08/18 09:33] Jim N: What I've done in the past for those scenarios is to create a new field (that's null), put a before insert or update trigger on the table to maintain that field.
[11/08/18 09:33] Jim N: Then run a "backfill" that processes a few hundred / thousand rows per transaction, with a delay between each batch.
[11/08/18 09:34] Jim N: Once I know that all rows in the table have been properly updated, drop the old row, and maybe mark the new row as NOT NULL.
[11/08/18 09:43] Jim N: btw, I know there's been a talk about this at a conference in the last year or two...
[11/08/18 09:49] Jim F: What happens at the page level if the default value of an ALTER TABLE ADD COLUMN is null? Once upon a time when I worked at [a commercialized fork of PostgreSQL], which was built on a version of PostgreSQL circa 2000, I recall that the table would be versioned. This was a pure metadata change, but the added columns would be created for older-version rows on read, and probably updated on write. Is that how it currently works?
[11/08/18 09:55] Jim N: Jim F in essence, yes.
[11/08/18 09:56] Jim N: Though I wouldn't describe it as being "versioned"
[11/08/18 09:57] Jim N: But because columns are always added to the end of the tuple (and we never delete from pg_attribute), heap_deform_tuple can detect if a tuple is "missing" columns at the end of the tuple and just treat them as being null.
[11/08/18 09:57] Jim N: At least I'm pretty sure that's what's going on, without actually re-reading the code right now. 😉
[11/08/18 10:08] Jim F: does it work that way for non-null defaults as well? that would create a need for versioning, if the defaults changed at different points in time
[11/08/18 10:08] Robins: While at that topic.... Postgres v11 now has the feature to do what Jim F was talking about (even for non-NULLs). Although as Jim Nasby said, you still need to be careful about which (other) kind of ALTERs force a rewrite and use the Trigger workaround. "Many other useful performance improvements, including the ability to avoid a table rewrite for ALTER TABLE ... ADD COLUMN with a non-null column default"
[11/08/18 10:08] Jim F: exactly...

Did we get anything wrong here? Do you disagree? Feel free to comment. :)

Announcing Continuous Innovation on Oracle E-Business Suite 12.2

Steven Chan - Thu, 2018-11-08 12:18

Oracle announced a “Continuous Innovation” release model for Oracle E-Business Suite 12.2, delivering ongoing applications and underlying technology stack updates without a major upgrade. At the same time, we are announcing Premier Support for Oracle E-Business Suite 12.2 through at least 2030.

This means that Oracle E-Business Suite customers already running on 12.2 can continue to access new applications functionality and separately uptake later versions of underlying technology for years to come, without the time and expense of a major release upgrade. 

Oracle E-Business Suite customers historically have adopted new versions of the technology stack together with a new applications code level via a major release upgrade. This involved using Oracle’s Rapid Install tool to deliver new versions of Oracle Fusion Middleware platform technologies, a new applications code level, and upgrade scripts to move the schema, PL/SQL, and data in place to the new Oracle E-Business Suite release level.

It has been Oracle’s plan to move to a Continuous Innovation release model that separates technology stack updates from applications code updates, allowing E-Business Suite customers to consume later technology while retaining their current applications code level. We previously announced that we would start Continuous Innovation with the next major 12.X release, after the existing 12.2 release. We have since determined that we can implement Continuous Innovation on the existing 12.2 release, removing the need for a new 12.X release.

For more details, see:

Categories: APPS Blogs

Pass summit – dbi visit day 3

Yann Neuhaus - Thu, 2018-11-08 10:52
The Microsoft data platform is evolving

The third day began with the keynote held by Rohan Kumar, Corporate Vice President of Azure Data at Microsoft.

The main message was “Hybrid data platform is the way for the future” and Microsoft is working in this direction.

The use of AI and analytics to transform the customer's business is also a key driver for the Microsoft data platform, which is being built to enable this easily.

Rohan reminded us of the customer priorities:
– Modernizing on-premises
– Modernizing the cloud
– Building cloud-native applications
– Unlocking insights

To support these priorities we got an overview of the latest developments in the data platform products and services:
Modernizing the on-premises solution with SQL Server 2019
○ More language support
○ Best security and performance
○ Power BI Report Server
○ Mission critical availability and performance
○ Security and compliance
○ Management and development
○ Big data and analytics

And Microsoft is announcing SQL Server 2019 CTP 2.1

One big evolution in terms of performance is that some system tables will be moved to in-memory technology.
SQL Server 2019 is becoming a hub for integrating data and also enables data ingestion in big data clusters combining Spark and SQL Server.

Azure Data Studio is the new tool to manage all the Azure data services in order to see them as a unified data platform.

Modernizing the cloud with the new services or their new capabilities or functionalities
Azure database migration service
Azure SQL database managed instance with a business critical option available
Azure SQL Database Hyperscale (up to 100 TB)
Accelerated database recovery and Machine Learning Services in Azure SQL Database
Building cloud-native apps with Azure Cosmos DB
Multi-Master and Cassandra API

Unlocking insights
New features of Azure SQL Data Warehouse, like workload management
Partnership Informatica, tableau connectors
HDInsight 4.0
Real-time analytics
Azure event hubs for Kafka
Azure Data Explorer
Power BI Dataflow
SSRS Reports in Power BI
Azure Databricks new feature

The SQL Server product is evolving constantly, and the Azure world even faster.

Again an interesting data analytic day…stay tuned..

The article Pass summit – dbi visit day 3 appeared first on Blog dbi services.

Gonzaga University Gives Oracle Cloud an “A” for Cost and Efficiency

Oracle Press Releases - Thu, 2018-11-08 08:00
Press Release
Gonzaga University Gives Oracle Cloud an “A” for Cost and Efficiency Leading private university modernizes infrastructure with Oracle to securely support critical applications

Redwood Shores, Calif.—Nov 8, 2018

Gonzaga University is using Oracle Cloud to streamline its operations and better serve its student and staff community. With Oracle Cloud Infrastructure and Oracle Database Cloud, the University has not only cut costs and reduced complexity but also upgraded to new versions of Ellucian and its ERP system in just seven months—all while improving security and reliability.

Ranked the fourth best regional university in the West for the fourth straight year by U.S. News & World Report, Gonzaga University is an exemplary learning community that educates students for lives of leadership and service for the common good. In keeping with its Catholic, Jesuit, and humanistic heritage and identity, Gonzaga models and expects excellence in academic and professional pursuits and intentionally develops the whole person—intellectually, spiritually, culturally, physically, and emotionally. When an aging IT infrastructure was struggling to support the University’s strategic goals, it knew it needed to rethink its approach.

“Achieving sustainable operations is one of the key tenets in Gonzaga’s strategic plan,” said Darren Owsley, Chief Technology Officer at Gonzaga. “Our existing IT infrastructure on premise would have required significant upgrades to support new technology moving forward as well as the applications that are critical to our operations today. After a competitive bid process, Oracle Cloud was the clear choice for what Gonzaga needed, both in price and performance.”

By moving its core functions to the cloud, Gonzaga was able to eliminate the need for time-consuming configurations and provision new servers in minutes versus weeks. This has dramatically improved efficiency, while also allowing for the acceleration of new projects. And with the ability to easily scale and pay for only the capacity it’s using, Gonzaga now has the cost-effective foundation it needs to grow well into the future. The University is also utilizing Oracle Cloud Infrastructure for disaster recovery.

“At a recent event, other universities were lamenting about cloud projects that took years,” added Borre Ulrichsen, Chief Information Officer at Gonzaga. “With Oracle, our implementation through to production took mere months. Moreover, I know our database and server infrastructure is being continually upgraded and patched by Oracle, taking the burden off of our limited IT resources. This has allowed us to focus more time on strategic initiatives that better serve our students and staff and continue to help make Gonzaga one of the top universities in the West.”

“Gonzaga was looking for a partner that would provide support every step of the way during its transition to the cloud,” said Kash Iftikhar, vice president of product and strategy, Oracle Cloud Infrastructure. “Oracle Cloud Infrastructure delivers the reliability needed – at the lowest cost, and highest performance. We are honored to be working with such a highly-esteemed institution, and helping them focus on their mission, instead of infrastructure.”

Gonzaga was also recognized at OpenWorld 2018 as a winner of the Oracle Excellence Award for Oracle Cloud Platform Innovation, awarded to customers and partners who are using Oracle Cloud Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) to accelerate innovation and drive business transformation by increasing agility, lowering costs and reducing IT complexity.

Contact Info
Danielle Tarp
Oracle
+1.650.506.2905
Danielle.Tarp@oracle.com
Katie Barron
Oracle
+1.202.904.1138
Katie.Barron@oracle.com
About Oracle Cloud Infrastructure

Oracle Cloud Infrastructure is an enterprise Infrastructure as a Service (IaaS) platform. Companies of all sizes rely on Oracle Cloud to run enterprise and cloud native applications with mission-critical performance and core-to-edge security. By running both traditional and new workloads on a comprehensive cloud that includes compute, storage, networking, database, and containers, Oracle Cloud Infrastructure can dramatically increase operational efficiency and lower total cost of ownership. For more information, visit https://cloud.oracle.com/iaas.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Leading Utilities Weather the Storm with Oracle

Oracle Press Releases - Thu, 2018-11-08 07:00
Press Release
Leading Utilities Weather the Storm with Oracle As weather volatility increases, customers look to Oracle Utilities to help maintain and restore power to millions

Redwood Shores, Calif.—Nov 8, 2018

Severe weather events are increasing in both frequency and intensity. To help keep power available while supporting predictable restoration for millions of affected customers, the top utilities across the globe rely on the performance and dependability of Oracle Utilities Network Management System (NMS).

According to the National Oceanic and Atmospheric Association, “In 2018 (as of October 9), there have been 11 weather and climate disaster events with losses exceeding $1 billion each across the United States.” Likewise, a study by the European Academies’ Science Advisory Council (EASAC) covered in Science Daily, noted that “extreme weather events have become more frequent over the past 36 years, with a significant uptick in floods and other hydrological events compared even with five years ago.”

When emergencies strike, utilities’ grid operations need to effectively respond to unplanned outages, integrate emergency and mutual-aid crews, and get accurate information to customers fast. Built from the ground-up as a fully-integrated outage and distribution management system, NMS gives utility operations teams accurate, timely, and reliable information critical for meeting the challenges of 21st-century grid operations. Today, more than 40 utility customers—including 6 of the top 9 utilities in the U.S.—depend on the industry-leading Oracle NMS platform in their control centers; several of which have the most demanding storm performance requirements in the industry. On the East Coast of the U.S. alone, power can be restored to 19.4 million customers faster thanks to Oracle NMS.

Customers rely on Oracle to navigate the storms

“The past three years has brought a relentless stream of hurricanes and storms, including Matthew, Irma and Michael,” said Josh Stallings, systems & standards manager, Georgia Power Company. “During each, Georgia Power Company was able to rely on Oracle NMS to bring order to an otherwise chaotic situation. In particular with Irma, which resulted in service interruptions for some 1,630,000 customers we had more than 425 concurrent users on NMS, allowing our operations staff to have a single valid source to monitor and understand what specific parts of the electrical distribution system were impacted, as well as validate the current state of various devices and equipment on the distribution system. The software also allowed us to perform meaningful customer communications ranging from broad-scale restoration targets down to event level communication for cases of trouble with repair crews onsite.”

“In Ireland, we are experiencing more extreme weather events each year,” said Robert Power, NMS system manager at ESB Networks. “Storm Darwin was a turning point in February 2014. The development team fully supported us implementing all required enhancement requests identified in the aftermath of this storm. Since going live with Oracle NMS in April 2017, the system has demonstrated exceptional performance for more than 150 users, coordinating and reporting on all outage activity. Hurricane Ophelia was our first real test, however, the intuitive and flexible nature of the application allowed us to configure storm rules, ETRs and UI as needed to reflect our unique user and system requirements to best manage the restoration effort.” 

Oracle NMS: The calm before, during and after the storm

Based on 30 years of market experience, NMS leverages market-leading outage management technology with distribution management and advanced analytics to empower utilities to maximize grid operation capabilities, shorten outage duration and optimize distribution, all while keeping their customers engaged and informed. NMS reported outstanding performance during the recent hurricane season, which included Florence and Michael. Oracle is now working closely with its utility customers to brace for extreme weather on the horizon this winter, 15 of which serve more than 1 million customer meters each.

“While we previously viewed a catastrophic episode such as Katrina as a once in a lifetime event, they are fast becoming yearly occurrences,” said Rodger Smith, senior vice president and general manager for Oracle Utilities. “As such, the technology to stabilize the world’s power grids and prepare them for uncharted weather territory needs to be equal parts cutting-edge and battle-tested. Oracle’s field-proven NMS system has enabled our customers to weather the most intense storms, helping maintain and restore power to millions. Utilities have enough to worry about during a major weather event, they will attest that our Network Management System isn’t one of them.”

Providing a “single-pane-of-glass” to maintain efficient grid operations, Oracle NMS presents the information control center operators need to effectively optimize the grid end-to-end from the substation to the end customer. Built on a data-centric, platform-based approach with integrated applications, Oracle NMS delivers unified user access to outage management (OMS), distribution management (DMS), distributed energy resource management (DERMS) as well as supervisory control and data acquisition (SCADA) systems.

Contact Info
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
Molly Hardy-Knowles
Hill+Knowlton Strategies
+1.713.752.1931
molly.hardy-knowles@hkstrategies.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Where / Having

Jonathan Lewis - Thu, 2018-11-08 06:11

There’s a very old mantra about the use of the “having” clause that tells us that if it’s valid (i.e. will always give the same results) then any predicate that could be moved from the having clause to the where clause should be moved. In recent versions of Oracle the optimizer will do this for itself in some cases but (for reasons that I’m not going to mention) I came across a silly example recently where a little manual editing produced a massive performance improvement.

Here’s a quick demo:


rem
rem     Script:         where_having.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2018
rem     Purpose:
rem
rem     Last tested
rem             18.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem

create table t1
as
select * 
from all_objects 
where rownum <= 50000   -- > comment to avoid WordPress format issue
;

spool where_having.lst

set serveroutput off

select /*+ gather_plan_statistics */ 
        object_type, count(*) 
from    t1 
group by 
        object_type 
having  count(*) > 0 
and     1 = 2
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
;

The big question is: will Oracle do a full tablescan of t1, or will it apply a “null is not null” filter early to bypass that part of the plan. Here’s the plan pulled from memory, with run-time statistics (all versions from 11g to 18c):


--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      0 |00:00:00.02 |     957 |    955 |       |       |          |
|*  1 |  FILTER             |      |      1 |        |      0 |00:00:00.02 |     957 |    955 |       |       |          |
|   2 |   HASH GROUP BY     |      |      1 |      1 |     27 |00:00:00.02 |     957 |    955 |  1186K|  1186K| 1397K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |  50000 |  50000 |00:00:00.01 |     957 |    955 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter((COUNT(*)>0 AND 1=2))


As you can see, the filter at operation 1 includes the contradiction “1=2”, but Oracle tests this only after doing the full tablescan and aggregation. If you move the “1=2” into the where clause the tablescan doesn’t happen.
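For clarity, the manually edited statement that avoids the tablescan would look something like this (the contradiction moved from the having clause into the where clause):

select /*+ gather_plan_statistics */
        object_type, count(*)
from    t1
where   1 = 2
group by
        object_type
having  count(*) > 0
;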

Interestingly, if you write the query with an in-line view and trailing where clause:


select /*+ gather_plan_statistics */
        *
from    (
        select
                object_type, count(*)
        from    t1
        group by
                object_type
        having  count(*) > 0
        )
where
        1 = 2
;

The optimizer is clever enough to push the final predicate inside the view (where you might expect it to become part of the having clause) and push it all the way down into a where clause on the base table.


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |      0 |00:00:00.01 |
|*  1 |  FILTER              |      |      1 |        |      0 |00:00:00.01 |
|   2 |   HASH GROUP BY      |      |      1 |      1 |      0 |00:00:00.01 |
|*  3 |    FILTER            |      |      1 |        |      0 |00:00:00.01 |
|   4 |     TABLE ACCESS FULL| T1   |      0 |  50000 |      0 |00:00:00.01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(COUNT(*)>0)
   3 - filter(NULL IS NOT NULL)



A quirky case of the optimizer handling the (apparently) more complex query better than it does the simpler query.

Do we know for sure from the trace file, that the application uses transactions on the database?

Tom Kyte - Thu, 2018-11-08 05:26
We got a deadlock. In the trace file there is the deadlock graph: Deadlock graph: ---------Blocker(s)-------- ---------Waiter(s)--------- Resource Name process session holds waits process session holds wai...
Categories: DBA Blogs
