Feed aggregator

Create Age Group Dynamically Based on Lower and Upper Age limit and age band

Tom Kyte - Tue, 2018-07-24 09:26
Hi, I have following requirement and seeking help to achieve this via SQL only (no PL/SQL block) - Table A contains following columns - 1. Employee_ID 2. Employee_Age 3. Employee_Dept The end user needs to provide following 3 prompt val...
Categories: DBA Blogs

SQL Server on Linux – I/O internal thoughts

Yann Neuhaus - Tue, 2018-07-24 07:55

Let’s start the story from the beginning with some funny testing I tried to perform with SQL Server on Linux a couple of months ago. At that time, I wanted to get some pictures of syscalls from SQL Server, as I had already done in the past on the Windows side with Sysinternals tools such as Procmon, along with stack traces. On Linux, strace is probably one of the best counterparts.


First, please note that this blog reflects my own research; it is in no way official Microsoft documentation and may well contain some inaccuracies. This kind of write-up is definitely not easy, especially when you are not a developer of the product, and because things change quickly nowadays a post like this can become outdated fast :) Anyway, I was just curious to figure out how SQL Server deals with I/O on the Linux side, and the main safety point here is certainly to show how you may verify it on Linux. So let’s start from what we already know on the Windows operating system: the SQL Server engine goes through the Win32 API and functions like CreateFile(), ReadFile() and WriteFile() to deal with I/O, but let’s focus specifically on the CreateFile() function here. CreateFile() is used to create or open an existing file or I/O device with some specific flags. Some of them, such as FILE_FLAG_WRITE_THROUGH, are used to meet the Write-Ahead Logging (WAL) protocol by bypassing all system / disk caches (cf. Microsoft article).

On April 10, 2018, I did my first tests on the Linux side with SQL Server 2017 CU5, and here was my first strace output after creating the dbi_db database:

[Image: strace stack output]

It was the expected output for me, because on Linux SQL Server uses the low-level open() system call – the counterpart of CreateFile() on Windows – but the surprising thing was seeing the O_DIRECT flag alone. I’m not a system developer, but from my position I can understand the benefits of using O_DIRECT with database systems: it is driven by AIO (asynchronous I/O) and completely bypasses any kernel-space buffers (by default on Linux). I take the opportunity to thank @dbaffaleuf for our interesting discussions on this topic.

But referring to the documentation, we may also read the following passage about O_DIRECT:

File I/O is done directly to/from user-space buffers. The O_DIRECT flag on its own makes an effort to transfer data synchronously, but does not give the guarantees of the O_SYNC flag that data and necessary metadata are transferred

From my understanding, using O_DIRECT implies that durable writes on block devices are not guaranteed, and from my trace I noticed the transaction log seemed to be opened with O_DIRECT in this case …

2837  20:44:32 open("/u02/sqlserverlog/mssqlserver/dbi_test_log.ldf", O_RDWR|O_CREAT|O_EXCL|O_DIRECT, 0660) = 187
2837  10:13:09 fstat(187, {st_mode=S_IFREG|0660, st_size=0, ...}) = 0


… From a WAL protocol perspective, using such a flag might fail to meet the requirements, because we might experience data loss in case of a system / storage outage, unless we either implement another low-level mechanism like fsync() for the transaction log and checkpoints, or are sure the storage guarantees that writes are O_DIRECT safe. At this stage I expected to find more fsync()-related entries in my trace, but no luck, as shown below (I show only a sample, but in fact there were no other relevant syscalls in the entire trace that might indicate any forced synchronization).
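To make that point concrete, here is a minimal Python sketch (an illustration only, not SQL Server’s actual code) of the explicit durability mechanism mentioned above: a buffered write() alone may sit in the kernel page cache, and fsync() is the call that forces both data and metadata to stable media.

```python
import os
import tempfile

def durable_append(path: str, payload: bytes) -> None:
    """Append a record and force it to stable storage before returning."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o660)
    try:
        os.write(fd, payload)
        os.fsync(fd)  # without this, a power failure could lose the write
    finally:
        os.close(fd)

# Hypothetical file name, for illustration only
log = os.path.join(tempfile.mkdtemp(), "demo_log.ldf")
durable_append(log, b"commit record\n")
print(os.path.getsize(log))  # 14
```

This is the pattern a WAL-compliant engine must follow in one form or another: either every commit is followed by an explicit flush, or the file is opened with flags that make each write synchronous.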

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 72.59   12.993418        2089      6220      2503 futex
 10.78    1.929811        7509       257           epoll_wait
  9.08    1.625657       47813        34        28 restart_syscall
  3.28    0.587733        8643        68           io_getevents
  1.97    0.351806       70361         5           nanosleep
  1.92    0.344254       34425        10        10 rt_sigtimedwait
  0.32    0.056943           3     22116           clock_gettime
  0.02    0.003530         101        35           munmap
  0.01    0.002149          32        67           io_submit
  0.01    0.001706           6       273           epoll_ctl
  0.01    0.000897           6       154           read
  0.00    0.000765          11        68           writev
  0.00    0.000605           4       136        68 readv
  0.00    0.000591           4       137           write
  0.00    0.000381          10        40        23 open
  …
------ ----------- ----------- --------- --------- ----------------
100.00   17.900633                 29827      2638 total


A couple of weeks ago, when I updated my test environment to SQL Server 2017 CU8 (on Linux), I noticed the following messages (in the red rectangle):

[Image: SQL Server error log showing the forced flush messages]

Hmm … that was pretty new and interesting. In fact, the aforementioned messages relate to this Microsoft article (thanks @Bobwardms for pointing it out to me). This new behavior has been available since SQL Server 2017 CU6, and the article describes how Microsoft introduced a new “forced flush” mechanism for SQL Server on Linux.

In a nutshell, in all scenarios a new flush mechanism guarantees that data is safely written to stable media for transaction logs and during checkpoints. Let’s dig further into both methods.

First, let me say that I applied the same pattern for all the tests that follow. The test protocol consisted of one dbo.t1 table with only one column (id int). For each test I inserted a bunch of rows (67 rows to be precise, corresponding to 67 distinct implicit transactions) without any other concurrent user activity. Some internal SQL Server activity remains, but I guess we may consider it negligible compared to my tests.

insert dbo.t1 values (1)
go 67


  • Default forced flush mechanism behavior


In this scenario, referring to my strace output, database files are still opened with O_DIRECT only, as shown below (my database’s name is toto this time):

4745  10:54:34 open("/u01/sqlserverdata/mssqlserver/toto.mdf", O_RDWR|O_CREAT|O_EXCL|O_DIRECT, 0660) = 188


I also used the following command to get a summary of the number of calls per syscall directly:

$ sudo strace -f -c -o sql_strace.txt $(pidof sqlservr |sed 's/\([0-9]*\)/\-p \1/g')


Here is the sample output I got:

$ cat sql_strace.txt
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 71.36   24.078231        2685      8969      3502 futex
 12.13    4.093680      120402        34        30 restart_syscall
 10.03    3.384817       12583       269           epoll_wait
  2.67    0.901541       13258        68           io_getevents
  1.77    0.598830       46064        13        13 rt_sigtimedwait
  1.66    0.560893       93482         6           nanosleep
  0.23    0.077873           2     31422           clock_gettime
  0.09    0.030924         462        67           fsync
  0.01    0.003212         321        10           madvise
  0.01    0.003026          22       136           write
  …
------ ----------- ----------- --------- --------- ----------------
100.00   33.742080                 42195      3651 total


This time I noticed additional fsync() calls to guarantee writes to the blocks – 67 calls, which seems to correspond to my 67 transactions, right? I double-checked my strace output and it seems to be the case, but I may be wrong on this point, so please feel free to comment. Another interesting point is that I continued to notice asynchronous I/O through the io_getevents function in my trace. That makes sense to me: writes of data pages are asynchronous, while those to the transaction log are synchronous by design. In this mode fsync() is triggered at transaction commits and during checkpoints.


  • Enabling trace flag 3979

Enabling trace flag 3979 disables the forced flush behavior, which is replaced by the writethrough and alternatewritethrough options. Referring to the Microsoft article, the former translates the well-known FILE_FLAG_WRITE_THROUGH flag requests into O_DSYNC opens, while the latter adds some performance optimization by using fdatasync() rather than fsync(). Indeed, fdatasync() is supposed to generate less I/O activity because it doesn’t require synchronizing the file metadata (only the data portion of the file is concerned here).
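As an illustration only (a Python sketch, not SQL Server internals), both calls are exposed directly by the OS and differ only in how much metadata they force to disk:

```python
import os
import tempfile

# Hypothetical file name, for illustration only
path = os.path.join(tempfile.mkdtemp(), "log.ldf")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o660)

os.write(fd, b"record 1\n")
os.fsync(fd)      # flushes file data AND metadata (size, timestamps, ...)

os.write(fd, b"record 2\n")
os.fdatasync(fd)  # flushes data; metadata not needed to read it back may be deferred

os.close(fd)
print(os.path.getsize(path))  # 18
```

Both calls guarantee the written records are on stable storage when they return; fdatasync() can simply avoid an extra metadata write, which is the optimization the article alludes to.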

Anyway, my strace sample output below confirmed that both the data file and the transaction log were opened with both O_DIRECT and O_DSYNC, meaning we are bypassing the kernel buffer space and also forcing synchronous I/O for both files. Does it matter? Well, from my understanding, writing dirty data pages is still an asynchronous process when checkpoints occur. How much better does it perform in this case? I don’t know, and it would likely require another bunch of thorough tests on different use cases.

5279  11:07:36 open("/u01/sqlserverdata/mssqlserver/dbi_test.mdf", O_RDWR|O_CREAT|O_EXCL|O_DSYNC|O_DIRECT, 0660) = 185
…
5279  11:07:38 open("/u02/sqlserverlog/mssqlserver/dbi_test_log.ldf", O_RDWR|O_CREAT|O_EXCL|O_DSYNC|O_DIRECT, 0660) = 185


As previously, I noticed 67 calls of fdatasync() (and no longer fsync() here), likely related to my 67 implicit transactions, and I still continued to notice asynchronous I/O driven by the io_getevents() and io_submit() syscalls.
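For completeness, here is a small Python sketch (again an illustration, not SQL Server code) of what an O_DSYNC open means in practice: each write() carries an implicit data flush, as if followed by fdatasync(), which is why no separate synchronization calls need to appear for those writes in a trace.

```python
import os
import tempfile

# Hypothetical file name, for illustration only
path = os.path.join(tempfile.mkdtemp(), "dsync.ldf")

# O_DSYNC: write() does not return until the data has reached stable storage
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o660)
os.write(fd, b"synchronous record\n")  # durable on return, no explicit fsync needed
os.close(fd)
print(os.path.getsize(path))  # 19
```

Note that O_DIRECT (seen in the trace above alongside O_DSYNC) is omitted here, since it additionally requires block-aligned buffers and filesystem support.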

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 73.90   12.678046        2360      5371      1927 futex
 10.80    1.853571        7788       238           epoll_wait
  9.05    1.551962       45646        34        30 restart_syscall
  1.93    0.331275       47325         7         7 rt_sigtimedwait
  1.84    0.316524      105508         3           nanosleep
  1.61    0.276806        4131        67           io_getevents
  0.71    0.121815           7     18259           clock_gettime
  0.06    0.010184         152        67           fdatasync
  0.02    0.003851          26       146           read
  0.02    0.003217          12       272           epoll_ctl
  0.01    0.002139          32        67           io_submit
  0.01    0.002070          15       136           write
  …
------ ----------- ----------- --------- --------- ----------------
100.00   17.156630                 25170      2054 total


Finally, at the time of this write-up, Microsoft recommends enabling trace flag 3979 and changing the default values of the writethrough and alternatewritethrough options to 0 – reverting to the pre-CU6 behavior illustrated in the first section of this blog post – but only if your storage guarantees your writes will be “O_DIRECT” safe. I think you can understand why now – if my understanding of Linux I/O is correct :)

See you!


Cet article SQL Server on Linux – I/O internal thoughts est apparu en premier sur Blog dbi services.

Banks to Offer Greater Insight into Accounts for Global Corporate Customers with Oracle

Oracle Press Releases - Tue, 2018-07-24 07:00
Press Release
Banks to Offer Greater Insight into Accounts for Global Corporate Customers with Oracle New Oracle Banking Virtual Account Management simplifies transaction banking, enables operational efficiencies

Redwood Shores, Calif.—Jul 24, 2018

Oracle Financial Services’ new platform enables banks’ corporate customers to more efficiently manage and monitor their banking accounts. As corporate businesses expand globally, their operations become exponentially more complex. They often have many separate accounts that are used to process accounts payable and receivable, in multiple currencies, for example. They also must contend with dynamic market conditions and evolving regulatory requirements. Oracle Banking Virtual Account Management helps corporate businesses rationalize their complex accounting structures and simplify account management processes, such as centralizing their accounts using virtual account management (VAM). Virtual accounts are non-physical accounts that corporate businesses can use to optimize their working capital processes. With Oracle’s VAM solution, banks can leverage a Virtual Account Identifier and gain the flexibility to maintain different rules for different corporate customers.

“Oracle is committed to building industry-first corporate banking capabilities that are digitally enabled and exclusively designed for corporate banks,” said Chet Kamat, Senior Vice President, Banking, Oracle Financial Services. “To strengthen the relationship with their corporate customers, banks need to offer smarter transaction banking alternatives. Virtual accounts are an ideal way to give corporate customers access to real-time information and enable faster decision-making.”

Oracle Banking Virtual Account Management solution enables banks to centralize cash and liquidity and better manage transaction flows for corporate customers. This reduction in account complexity provides greater visibility and control of assets. Oracle’s VAM solution can facilitate easy segregation of inflow and outflow of funds and enables seamless and efficient reconciliation, which allows corporate customers to optimize working capital in a cost-effective manner. With Virtual Account Identifiers banks can enable corporate customers to set up their own rules for payment routing that best suits their business needs. Corporate businesses can now easily identify the paying customer and the purpose of payments.

The solution also enables banks to provide their customers the ability to define and manage their banking account structure at their convenience, in real-time. Customers can take charge of account structures across business units, geographies, and enjoy a friction-free banking experience. The solution also offers RESTFul APIs enabling banks to provide user experiences specific to customer needs.

Oracle Banking Virtual Account Management is pre-integrated with Oracle Banking Liquidity Management, Oracle Banking Payments and Oracle FLEXCUBE, which helps banks offer comprehensive support for virtual accounts. The solution is also capable of working with any core banking system, and banks can easily interface with corporate ERP systems and configure their customers’ banking requirements based on distinct needs.

Contact Info
Judi Palmer
Oracle Corporation
+1 650 607 6598
judi.palmer@oracle.com
Brian Pitts
Hill+Knowlton Strategies
+1 312 475 5921
brian.pitts@hkstrategies.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1 650 607 6598

Brian Pitts

  • +1 312 475 5921

Retail Consult Becomes Oracle PartnerNetwork Platinum Level Partner

Oracle Press Releases - Tue, 2018-07-24 07:00
Press Release
Retail Consult Becomes Oracle PartnerNetwork Platinum Level Partner Oracle Recognizes Retail Consult for Their Track Record of Success to Deliver Global Retail Transformation

Redwood Shores, Calif.—Jul 24, 2018

Retail Consult, a highly specialized group focused on technology solutions for retail, has achieved Platinum partner status in Oracle Partner Network (OPN).  By attaining Platinum level membership, Oracle has recognized Retail Consult for its in-depth expertise and excellence in delivering the Oracle Retail Solution Portfolio.

Retailers are under pressure to transform their business with solutions that will help them stay ahead of the competition. Retail Consult has demonstrated an extremely high success rate helping clients to successfully overcome the inherent risks with digital transformation projects.  Retail Consult provides solutions strategy, implementation, deployment, training and support services. These services combined with the depth and breadth of the Oracle Retail solutions has been a recipe for client success around the world.

Retail Consult has established the depth and breadth of its expertise across the Oracle Retail portfolio, including Oracle Retail Merchandise Operations Management, Oracle Retail Planning and Optimization, Oracle Retail Supply Chain Management, Oracle Retail Omnichannel, Oracle Commerce and Oracle Retail Insights. Many of Retail Consult’s resources are trained and certified to implement Oracle’s state-of-the-art solutions that uniquely address the challenges of retailers today.

“The breadth and depth of the Oracle Retail solution, combined with our retail business and technology expertise, delivers an unbeatable combination for client success in retail transformation projects,” said Silvia Gomes, Partner, Retail Consult.

“Retail Consult is a valuable partner to Oracle Retail. They have a proven track record of success delivery to our customer base,” said Jeff Warren, Vice President of Solution Management, Oracle Retail. “Retail Consult consistently contributes to the growth and education of our community.”

Examples of Exceptional Delivery of the Oracle Retail solutions by Retail Consult include:

 

With its Platinum status, Retail Consult receives the high level of engagement, commitment and resources available to OPN partners. Platinum members receive dedicated virtual account management support to build joint development plans and help broaden Specialization areas and revenue opportunities. Additional benefits include priority placement in the OPN Solutions Catalog, one free application integration validated by Oracle, joint marketing and sales opportunities, discounted training and more. For more information about the benefits of becoming an OPN Platinum level partner, please visit: http://www.oracle.com/us/partnerships/index.htm

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Retail Consult

Retail Consult is a group of professionals who specialize in technology solutions for retail, offering clients global perspective and experience with operations in Europe, North, South and Central America. Retail Consult provides solutions strategy, implementation, deployment, training and support services. They serve clients across a range of retail segments including fashion, grocery, pharmacy, do it yourself, telecommunications, and electronics. In size, their clients range from regional, privately held companies to global retailers deploying multiple brands. Retail Consult is unique because they have an extremely high success rate for implementing Oracle Retail Solutions. Their values and mission drive them to be a strategic partner to all clients. http://www.retail-consult.com

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) is Oracle's partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program – an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud. To find out more visit: http://www.oracle.com/partners.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584

Foodation Selects Oracle Food and Beverage to Drive International Growth and Innovation

Oracle Press Releases - Tue, 2018-07-24 07:00
Press Release
Foodation Selects Oracle Food and Beverage to Drive International Growth and Innovation Oracle Hospitality Simphony Cloud Helps Streamline Business Operations, Accelerate Speed of Service, and Enhance Customer Experience

Redwood Shores, Calif.—Jul 24, 2018

Foodation, an Italian company that specializes in designing and developing fast-casual restaurants across Europe, has chosen Oracle Hospitality Simphony Cloud for its restaurants, including the popular Briscola Pizza Society, Polpa Burger Trattoria and Mariù Italian Kebab. With 10 restaurants and plans for international expansion, Foodation needed a technology platform that could support the strategic growth of its business and offer a single view of operations across all of its restaurants. Additionally, the company wanted to increase operational efficiency through centralization, improve functionality and modernize the customer experience.

“Over the next five years, we not only plan to expand our business internationally, but we also want to position our brands as the go-to restaurants across Europe, and Oracle Hospitality Simphony Cloud is helping us to achieve this goal,” said Riccardo Cortese, CEO of Foodation. “Oracle Simphony Cloud provides us with a reliable and trusted foundation for growth and enables us to streamline our operations with one single view of business. Before the implementation, we only had insight into business operations data on a weekly basis, but now, using the simple, easy-to-use Oracle Micros InMotion app, we have a single view of all of our restaurants that we can view as frequently as we like.”

Oracle Simphony Cloud provides Foodation with one centralized system to manage operations across all six brands, allowing centralized reporting and auditing for the management team, together with drill-down capabilities to real-time order-specific information. This minute-by-minute detail enables maximized revenue and profitability. In addition, Oracle Simphony Cloud integrates all digital channels, such as mobile payment, mobile ordering and mobile applications, for one centralized view of the customer, helping to accelerate speed of service and enhance the overall customer experience.

“In today’s competitive market, restaurants are challenged to operate not only faster, but smarter, in order to meet customers’ rising expectations,” said Chris Adams, vice president strategy, Oracle Food and Beverage. “Oracle Simphony Cloud enables business operations to flow more cohesively, efficiently and profitably while greatly enhancing the overall customer experience.”

For more information, learn how Foodation Serves Up A Growing Pizza Enterprise with Oracle Simphony, One Slice at a Time.

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

For more information about Oracle Food and Beverage, please visit www.Oracle.com/Food-Beverage

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584

Corporate Banks Can Drive Greater Digital Customer Engagement with New Oracle Solutions

Oracle Press Releases - Tue, 2018-07-24 07:00
Press Release
Corporate Banks Can Drive Greater Digital Customer Engagement with New Oracle Solutions New Corporate Banking Solutions Enable Smarter Operations for Credit Facilities, Corporate Lending and Trade Finance

Redwood Shores, Calif.—Jul 24, 2018

Oracle Financial Services launched three new corporate banking engagement solutions designed to meet the needs of banks looking to digitize their processes. Oracle Banking Credit Facilities Process Management, Oracle Banking Corporate Lending Process Management and Oracle Banking Trade Finance Process Management can enhance banks’ customer relationships, improve staff productivity and reduce costs.

“Oracle has long been committed to providing the banking sector with innovative financial solutions,” said Chet Kamat, Senior Vice President, Banking for Oracle Financial Services. “Harnessing the power of digitization for corporate banking operations is a step towards enabling banks to deliver exceptional value to their corporate customers. We have equipped them with purposeful insight for better decision-making, enabled standardization of processes to enhance distinct operational capabilities, and empowered them with real-time collaboration and intelligent automation to achieve higher levels of efficiency.”

The traditional approach in corporate banking has largely centered on product-processor-oriented work, backed by reams of paperwork. The new solutions digitize these labor-intensive processes.

Accelerate Digitization

With the launch of Oracle Banking Credit Facilities Process Management banks can accelerate credit origination and servicing, pre-qualify credit lines, track exposures to customers in real-time, and mitigate business risks.

Oracle Banking Corporate Lending Process Management enables banks to meet customers’ on-demand and custom financing needs by accelerating the process of loan origination, servicing and closure.

Oracle Banking Trade Finance Process Management helps banks manage the end-to-end trade finance lifecycle for a wide range of trade services such as guarantees, documentary credit and collections.

Oracle’s corporate banking engagement solutions offer greater productivity for banks with many key features:

  • Host-agnosticism
  • Configurable workflow automation
  • Role-based dashboards
  • Intuitive UI
  • Multi-device support
  • Extensibility
  • Real-time alerting
  • Reporting

In addition, widgets provide intelligent operational and analytical data addressing the needs of sales, operations, risk and legal teams. These solutions are also capable of exposing APIs and providing a front-to-back integration, which enables banks to build an extensible banking platform and enhance their service offerings.

Contact Info
Judi Palmer
Oracle Corporation
+1 650 784 7901
judi.palmer@oracle.com
Brian Pitts
Hill+Knowlton Strategies
+1 312 475 5921
brian.pitts@hkstrategies.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1 650 784 7901

Brian Pitts

  • +1 312 475 5921

"ORA-22902: CURSOR expression not allowed" in ORDS and APEX and how to fix them

Dimitri Gielis - Tue, 2018-07-24 06:18
When you want to define different blocks of data, some hierarchical, some not, you can do that by using the cursor expressions in SQL. An example of a query looks like this:

  select
    c.cust_first_name,
    c.cust_last_name,
    c.cust_city,
    cursor(select o.order_total, order_name,
              cursor(select p.product_name, i.quantity, i.unit_price
                       from demo_order_items i, demo_product_info p
                      where o.order_id = i.order_id
                        and i.product_id = p.product_id
                    ) product                
             from demo_orders o
            where c.customer_id = o.customer_id
          ) orders
  from demo_customers c

In the above query you see you can nest the cursor expressions. But you can also define the cursors next to each other. We use this technique a lot when defining where the data comes from in the APEX Office Print (AOP) APEX plugin:


In Oracle Application Express 18.1 there's a small bug (Bug 28298260 - REGRESSION: SQL QUERY CONTAINING CURSOR EXPRESSION CAN'T BE PARSED) where validating such a query gives "ORA-22902: CURSOR expression not allowed".

The APEX Dev team already fixed it - you can download the bundle PSE patch #28128115 from Oracle Support. Once applied, everything validates correctly again.


AOP also supports REST web services, and some people define those in ORDS (Oracle REST Data Services). Depending on the version of ORDS you might get the same error: "Error during evaluation of resource template: GET test/cursor/, SQL Error Code: 22,902, SQL Error Message: ORA-22902: CURSOR expression not allowed"



This doesn't mean your environment cannot use the cursor syntax; you just have to set the pagination size to 0 and it's fixed.

In the latest version of ORDS (18.2), you get a generic 500 error without the error number by default:


The fix is the same: set pagination to 0 and you are good to go.

Categories: Development

Release 18.0.0.0.0 Version 18.3.0.0.0 On-Premises binaries

Yann Neuhaus - Tue, 2018-07-24 05:57

Good news: the latest patchset for Oracle 12cR2 (which is not called a patchset anymore; it is actually release 18c, numbered 18.0.0.0.0) is available for download on OTN. This is great because the OTN download does not require access to Support and Software Updates. It is available to anybody under the Free Developer License Terms (basically development, testing, prototyping, and demonstrating an application that is not in production, and non-commercial use). We all complained about the ‘Cloud First’ strategy because we were eager to download the latest version. But the positive aspect is that we now have on OTN a release that has been stabilized by a few release updates. In the past, only the first version of the latest release was available there. Now we have one with many bugs fixed.

Of course, I didn’t wait and I have tested 18c as soon as it was available on the Oracle Cloud thanks to the ACE Director program that provided me with some Cloud Credits. In this post, I’ll update my Cloud database to run it with the on-premises binary. Because that’s the big strength of Oracle: we can run the same software, 100% compatible, on the Cloud and on our own servers. There are some limitations in the features available, but technically it is the same software.

Oracle Cloud First

Here is my Cloud version of Oracle 18c, installed in February (18.1) and updated in April (18.2) and July (18.3):

SQLcl: Release 18.2 Production on Tue Jul 24 11:02:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> host $ORACLE_HOME/OPatch/opatch lspatches
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
 
SQL> select banner from v$version;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
 
SQL> select banner_full from v$version;
 
BANNER_FULL
----------------------------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> select banner_legacy from v$version;
 
BANNER_LEGACY
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
 
SQL> exec dbms_qopatch.get_sqlpatch_status;
 
Patch Id : 27676517
Action : APPLY
Action Time : 18-APR-2018 20:44:50
Description : Database Release Update : 18.2.0.0.180417 (27676517)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_CDB1_CDBROOT_2018Apr18_20_43_27.log
Status : SUCCESS
 
Patch Id : 28090523
Action : APPLY
Action Time : 18-JUL-2018 11:38:20
Description : Database Release Update : 18.3.0.0.180717 (28090523)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB1_CDBROOT_2018Jul18_11_36_38.log
Status : SUCCESS
 
PL/SQL procedure successfully completed.

LINUX.X64_180000_db_home.zip

I have installed the on-premises 18c available on OTN. The good things about the new release are:

  • No need to extract installer files. Just unzip the Oracle Home and link the executable
  • This Oracle Home image already includes the latest Release Updates


SQLcl: Release 18.2 Production on Tue Jul 24 11:02:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> host $ORACLE_HOME/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
 

We have 4 updates from July here for the following components:

  • The Database (28090523)
  • The Java in the Oracle Home, aka JDK (27908644)
  • The Java in the database, aka OJVM (27923415)
  • The clusterware component for the database to match the CRS, aka OCW (28090553)
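Each line of the opatch lspatches output is a patch_id;description pair, so the component list above can be split with a few lines of shell. A minimal sketch, using the output captured above as embedded sample input (no Oracle Home is needed to run it):

```shell
#!/bin/sh
# Sample "opatch lspatches" output, copied verbatim from the listing above.
lspatches_output='27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)'

# Each line is "patch_id;description": split on the first semicolon.
printf '%s\n' "$lspatches_output" | while IFS=';' read -r id desc; do
  printf 'patch %s -> %s\n' "$id" "$desc"
done
```

In a real environment you would fill lspatches_output from $ORACLE_HOME/OPatch/opatch lspatches instead of the embedded sample, keeping only the id;description lines.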

So, now we have an image of the Oracle Home which already contains all the latest updates… except one:

$ cat $ORACLE_HOME/sqldeveloper/sqldeveloper/bin/version.properties
 
COMPANY=Oracle
PRODUCT=SQL Developer
VERSION=17.30003410937f
VER=17.3.2
VER_FULL=17.3.2.341.0937
BUILD_LABEL=341.0937
BUILD_NUM=341.0937
EDITION=

Unfortunately, that's an old version of SQL Developer, and there is no SQLcl. Just download those additionally and unzip them into the Oracle Home.

DataPatch

So, what happens when I open the database that I created on 18.1 and patched with the 18.2 and 18.3 RUs on the Oracle Cloud? There are two updates for the database (DBRU and OJVM). The DBRU is already there, so DataPatch only has to apply the OJVM:

[oracle@VM183x dbhome_1]$ $ORACLE_HOME/OPatch/datapatch
SQL Patching tool version 18.0.0.0.0 Production on Tue Jul 24 10:57:55 2018
Copyright (c) 2012, 2018, Oracle. All rights reserved.
 
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_11104_2018_07_24_10_57_55/sqlpatch_invocation.log
 
Connecting to database...OK
Gathering database info...done
 
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
 
Bootstrapping registry and package to current versions...done
Determining current state...done
 
Current state of interim SQL patches:
Interim patch 27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)):
Binary registry: Installed
PDB CDB$ROOT: Not installed
PDB PDB$SEED: Not installed
PDB PDB1: Not installed
 
Current state of release update SQL patches:
Binary registry:
18.3.0.0.0 Release_Update 1806280943: Installed
PDB CDB$ROOT:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.20.321353 AM
PDB PDB$SEED:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.21.802495 AM
PDB PDB1:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.23.230513 AM
 
Adding patches to installation queue and performing prereq checks...done
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
No interim patches need to be rolled back
No release update patches need to be installed
The following interim patches will be applied:
27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415))
 
Installing patches...
Patch installation complete. Total patches installed: 3
 
Validating logfiles...done
Patch 27923415 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_CDBROOT_2018Jul24_10_58_08.log (no errors)
Patch 27923415 apply (pdb PDB$SEED): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_PDBSEED_2018Jul24_10_58_56.log (no errors)
Patch 27923415 apply (pdb PDB1): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_PDB1_2018Jul24_10_58_56.log (no errors)
SQL Patching tool complete on Tue Jul 24 10:59:21 2018

Now here is the history of patches:

SQL> exec dbms_qopatch.get_sqlpatch_status;
 
Patch Id : 27676517
Action : APPLY
Action Time : 18-APR-2018 20:44:50
Description : Database Release Update : 18.2.0.0.180417 (27676517)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_CDB1_CDBROOT_2018Apr18_20_43_27.log
Status : SUCCESS
 
Patch Id : 28090523
Action : APPLY
Action Time : 18-JUL-2018 11:38:20
Description : Database Release Update : 18.3.0.0.180717 (28090523)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB1_CDBROOT_2018Jul18_11_36_38.log
Status : SUCCESS
 
Patch Id : 27923415
Action : APPLY
Action Time : 24-JUL-2018 10:59:19
Description : OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_CDBROOT_2018Jul24_10_58_08.log
Status : SUCCESS
 
PL/SQL procedure successfully completed.

This is all good. Despite the different release schedules, the level of software is exactly the same. And on-premises we can start on a release with low regression risk (18c is like a patchset) but many fixes (several release updates). For the moment only the Linux port is there; the other platforms should come this summer.

 

Cet article Release 18.0.0.0.0 Version 18.3.0.0.0 On-Premises binaries est apparu en premier sur Blog dbi services.

Oracle Database 18c silent installation and database creation on Oracle Linux

Pierre Forstmann Oracle Database blog - Mon, 2018-07-23 16:59

Today, 23rd of July 2018, Oracle Corp. released Oracle 18c for Linux on OTN.

Silent Oracle Database installation

This is my first 18c installation on a new Oracle Linux 7.5 virtual machine (VM) using a minimal Linux installation:

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)
# cat /etc/os-release 
NAME="Oracle Linux Server"
VERSION="7.5"
ID="ol"
VERSION_ID="7.5"
PRETTY_NAME="Oracle Linux Server 7.5"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:5:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"

ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.5
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.5
# 

This VM has 4 GB of RAM (and 4 GB of swap space), one single 40 GB disk, and is connected to the internet to be able to access the Oracle Linux public yum repository.

I have enabled the following repository in /etc/yum.repos.d/public-yum-ol7.repo:

[ol7_u5_base]
name=Oracle Linux $releasever Update 5 installation media copy ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/5/base/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

I have checked YUM repositories with:

# yum repolist                     
Loaded plugins: ulninfo
ol7_UEKR4/x86_64/primary                                       |  40 MB  00:00:51     
ol7_UEKR4                                                                     732/732
ol7_latest                                                                  8173/8173
ol7_u5_base                                                                 7278/7278
repo id            repo name                                                    status
ol7_UEKR4/x86_64   Latest Unbreakable Enterprise Kernel Release 4 for Oracle Li   732
ol7_latest/x86_64  Oracle Linux 7Server Latest (x86_64)                         8 173
ol7_u5_base/x86_64 Oracle Linux 7Server Update 5 installation media copy (x86_6 7 278
repolist: 16 183

I have checked that Oracle 18c preinstallation RPM is available:

# yum info *preinstall* | grep Name 
Name        : oracle-database-preinstall-18c
Name        : oracle-database-server-12cR2-preinstall
Name        : oracle-rdbms-server-11gR2-preinstall
Name        : oracle-rdbms-server-12cR1-preinstall

I have installed the 18c preinstallation RPM to ease Oracle installation, using the internet connection. I have connected as root and run:

# yum -y install  oracle-database-preinstall-18c

Note that the preinstallation RPM has also created the oracle account:

# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

I have downloaded the Oracle 18c installation media from OTN:

$ ls -rtl
total 4457668
-rw-r--r--. 1 oracle oinstall 4564649047 Jul 23 21:54 LINUX.X64_180000_db_home.zip

I have created a target Oracle Home directory with the root account:

# mkdir -p /u01/db18c
# chown oracle:dba /u01/db18c

I have unzipped the installation media into this directory:

$ cd /u01/db18c
$ unzip /stage/LINUX.X64_180000_db_home.zip 

I have created the Oracle Base and Oracle Inventory directories with the root account:

# mkdir /u01/base
# chown oracle:dba /u01/base
# mkdir /u01/orainv
# chown oracle:dba /u01/orainv
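Before launching the installer, it can help to verify that the three directories created above exist and are writable by the installing user; a failed silent install is harder to diagnose than a failed pre-check. A minimal sketch (the paths are the ones used in this article; adjust them for your environment):

```shell
#!/bin/sh
# Pre-flight check before the silent install: each directory must exist
# and be writable by the oracle user running the installer.
check_dir() {
  if [ -d "$1" ] && [ -w "$1" ]; then
    echo "OK: $1"
  else
    echo "MISSING OR NOT WRITABLE: $1"
  fi
}

check_dir /u01/db18c    # Oracle Home (unzipped image)
check_dir /u01/orainv   # Oracle Inventory
check_dir /u01/base     # Oracle Base
```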

I have run the following script from /u01/db18c with the oracle account:

./runInstaller \
-silent \
-responseFile /u01/db18c/install/response/db_install.rsp \
   oracle.install.option=INSTALL_DB_SWONLY             \
   UNIX_GROUP_NAME=oinstall                            \
   INVENTORY_LOCATION=/u01/orainv                      \
   SELECTED_LANGUAGES=en                               \
   ORACLE_HOME=/u01/db18c                              \
   ORACLE_BASE=/u01/base                               \
   oracle.install.db.InstallEdition=EE                 \
   oracle.install.db.isCustomInstall=false             \
   oracle.install.db.OSDBA_GROUP=dba                   \
   oracle.install.db.OSBACKUPDBA_GROUP=dba             \
   oracle.install.db.OSDGDBA_GROUP=dba                 \
   oracle.install.db.OSKMDBA_GROUP=dba                 \
   oracle.install.db.OSRACDBA_GROUP=dba                \
   SECURITY_UPDATES_VIA_MYORACLESUPPORT=false          \
   DECLINE_SECURITY_UPDATES=true

Running the script has generated the following output:

Launching Oracle Database Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. installActions2018-07-23_10-22-54PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: installActions2018-07-23_10-22-54PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/db18c/install/response/db_2018-07-23_10-22-54PM.rsp

You can find the log of this install session at:
 /tmp/InstallActions2018-07-23_10-22-54PM/installActions2018-07-23_10-22-54PM.log

As a root user, execute the following script(s):
	1. /u01/orainv/orainstRoot.sh
	2. /u01/db18c/root.sh

Execute /u01/orainv/orainstRoot.sh on the following nodes: 
[ol7defs0]
Execute /u01/db18c/root.sh on the following nodes: 
[ol7defs0]


Successfully Setup Software with warning(s).
Moved the install session logs to:
 /u01/orainv/logs/InstallActions2018-07-23_10-22-54PM

I have ignored the following warning:

INFO:  [Jul 23, 2018 10:23:12 PM] ------------------List of failed Tasks------------------
INFO:  [Jul 23, 2018 10:23:12 PM] *********************************************
INFO:  [Jul 23, 2018 10:23:12 PM] Physical Memory: This is a prerequisite condition to test whether the system has at least 8GB (8388608.0KB) of total physical memory.
INFO:  [Jul 23, 2018 10:23:12 PM] Severity:IGNORABLE
INFO:  [Jul 23, 2018 10:23:12 PM] OverallStatus:VERIFICATION_FAILED
INFO:  [Jul 23, 2018 10:23:12 PM] -----------------End of failed Tasks List----------------

I have run the root scripts with the root user:

# /u01/orainv/orainstRoot.sh
Changing permissions of /u01/orainv.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/orainv to oinstall.
The execution of the script is complete.
# /u01/db18c/root.sh
Check /u01/db18c/install/root_ol7defs0_2018-07-23_22-31-41-412574476.log for the output of root script
# cat /u01/db18c/install/root_ol7defs0_2018-07-23_22-31-41-412574476.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/db18c
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Oracle Trace File Analyzer (TFA) is available at : /u01/db18c/suptools/tfa/release/tfa_home/bin/tfactl 

I have checked the detailed Oracle Database version:

$ export ORACLE_HOME=/u01/db18c
$ export PATH=$ORACLE_HOME/bin:$PATH
$ $ORACLE_HOME/OPatch/opatch lsinv
Oracle Interim Patch Installer version 12.2.0.1.14
Copyright (c) 2018, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/db18c
Central Inventory : /u01/orainv
   from           : /u01/db18c/oraInst.loc
OPatch version    : 12.2.0.1.14
OUI version       : 12.2.0.4.0
Log file location : /u01/db18c/cfgtoollogs/opatch/opatch2018-07-23_22-35-16PM_1.log

Lsinventory Output file location : /u01/db18c/cfgtoollogs/opatch/lsinv/lsinventory2018-07-23_22-35-16PM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: ol7defs0
ARU platform id: 226
ARU platform description:: Linux x86-64

Installed Top-level Products (1): 

Oracle Database 18c                                                  18.0.0.0.0
There are 1 products installed in this Oracle Home.


Interim patches (4) :

Patch  27908644     : applied on Wed Jul 18 19:44:11 CEST 2018
Unique Patch ID:  22153180
Patch description:  "UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171"
   Created on 4 May 2018, 01:21:02 hrs PST8PDT
   Bugs fixed:
     27908644

Patch  27923415     : applied on Wed Jul 18 19:41:38 CEST 2018
Unique Patch ID:  22239273
Patch description:  "OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)"
   Created on 15 Jul 2018, 10:33:22 hrs PST8PDT
   Bugs fixed:
     27304131, 27539876, 27952586, 27642235, 27636900, 27461740

Patch  28090553     : applied on Wed Jul 18 19:40:01 CEST 2018
Unique Patch ID:  22256940
Patch description:  "OCW RELEASE UPDATE 18.3.0.0.0 (28090553)"
   Created on 11 Jul 2018, 19:20:31 hrs PST8PDT
   Bugs fixed:
     12816839, 18701017, 22734786, 23698980, 23840305, 25709124, 25724089
     26299684, 26313403, 26433972, 26527054, 26586174, 26587652, 26647619
     26827699, 26860285, 26882126, 26882316, 26943660, 26996813, 27012915
     27018734, 27032726, 27034318, 27040560, 27080748, 27086406, 27092991
     27098733, 27106915, 27114112, 27121566, 27133637, 27144533, 27153755
     27166715, 27174938, 27174948, 27177551, 27177852, 27182006, 27182064
     27184253, 27204476, 27212837, 27213140, 27220610, 27222423, 27222938
     27238077, 27238258, 27249544, 27252023, 27257509, 27263677, 27265816
     27267992, 27271876, 27274143, 27285557, 27299455, 27300007, 27302415
     27309182, 27314512, 27315159, 27320985, 27334353, 27338838, 27346984
     27358232, 27362190, 27370933, 27377219, 27378959, 27379846, 27379956
     27393421, 27398223, 27399499, 27399762, 27399985, 27401618, 27403244
     27404599, 27426277, 27428790, 27430219, 27430254, 27433163, 27452897
     27458829, 27465480, 27475272, 27481406, 27481765, 27492916, 27496806
     27503318, 27503413, 27508936, 27508984, 27513114, 27519708, 27526362
     27528204, 27532009, 27534289, 27560562, 27560735, 27573154, 27573408
     27574335, 27577122, 27579969, 27581484, 27593587, 27595801, 27600706
     27609819, 27625010, 27625050, 27627992, 27654039, 27657467, 27657920
     27668379, 27682288, 27691717, 27702244, 27703242, 27708711, 27714373
     27725967, 27731346, 27734470, 27735534, 27739957, 27740854, 27747407
     27748321, 27757979, 27766679, 27768034, 27778433, 27782464, 27783059
     27786669, 27786699, 27801774, 27811439, 27839732, 27850736, 27862636
     27864737, 27865439, 27889841, 27896388, 27897639, 27906509, 27931506
     27935826, 27941514, 27957892, 27978668, 27984314, 27993298, 28023410
     28025398, 28032758, 28039471, 28039953, 28045209, 28099592, 28109698
     28174926, 28182503, 28204423, 28240153

Patch  28090523     : applied on Wed Jul 18 19:39:24 CEST 2018
Unique Patch ID:  22329768
Patch description:  "Database Release Update : 18.3.0.0.180717 (28090523)"
   Created on 14 Jul 2018, 00:03:50 hrs PST8PDT
   Bugs fixed:
     9062315, 13554903, 21547051, 21766220, 21806121, 23003564, 23310101
     24489904, 24689376, 24737581, 24925863, 25035594, 25035599, 25287072
     25348956, 25634405, 25726981, 25743479, 25824236, 25943740, 26226953
     26336101, 26423085, 26427905, 26450454, 26476244, 26598422, 26615291
     26646549, 26654411, 26731697, 26785169, 26792891, 26818960, 26822620
     26843558, 26843664, 26846077, 26894737, 26898279, 26928317, 26933599
     26956033, 26961415, 26966120, 26986173, 26992964, 27005278, 27026401
     27028251, 27030974, 27036408, 27038986, 27041253, 27044575, 27047831
     27053044, 27058530, 27060167, 27060859, 27061736, 27066451, 27066519
     27073066, 27086821, 27090765, 27101527, 27101652, 27110878, 27112686
     27119621, 27126666, 27128580, 27135647, 27143756, 27143882, 27147979
     27153641, 27155549, 27156355, 27163928, 27169796, 27181521, 27181537
     27189611, 27190851, 27193810, 27199245, 27208953, 27210038, 27210872
     27214085, 27215007, 27216224, 27221900, 27222121, 27222626, 27224987
     27226913, 27232983, 27233563, 27236052, 27236110, 27240246, 27240570
     27241221, 27241247, 27244337, 27244785, 27249215, 27250547, 27254851
     27258578, 27259386, 27259983, 27262650, 27262945, 27263276, 27263996
     27270197, 27274456, 27274536, 27275136, 27275776, 27282707, 27283029
     27283960, 27284499, 27285244, 27288230, 27292213, 27294480, 27301308
     27301568, 27302594, 27302681, 27302695, 27302711, 27302730, 27302777
     27302800, 27302960, 27304410, 27304936, 27305318, 27307868, 27310092
     27313687, 27314206, 27314390, 27318869, 27321179, 27321834, 27326204
     27329812, 27330158, 27330161, 27333658, 27333664, 27333693, 27334316
     27334648, 27335682, 27338912, 27338946, 27339115, 27339396, 27339483
     27339495, 27341036, 27345190, 27345231, 27345450, 27345498, 27346329
     27346644, 27346709, 27346949, 27347126, 27348081, 27348707, 27349393
     27352600, 27354783, 27356373, 27357773, 27358241, 27359178, 27359368
     27360126, 27364891, 27364916, 27364947, 27365139, 27365702, 27365993
     27367194, 27368850, 27372756, 27375260, 27375542, 27376871, 27378103
     27379233, 27381383, 27381656, 27384222, 27389352, 27392187, 27395404
     27395416, 27395794, 27396357, 27396365, 27396377, 27396624, 27396666
     27396672, 27396813, 27398080, 27398660, 27401637, 27405242, 27405696
     27410300, 27410595, 27412805, 27417186, 27420715, 27421101, 27422874
     27423251, 27425507, 27425622, 27426363, 27427805, 27430802, 27432338
     27432355, 27433870, 27434050, 27434193, 27434486, 27434974, 27435537
     27439835, 27441326, 27442041, 27444727, 27445330, 27445462, 27447452
     27447687, 27448162, 27450355, 27450400, 27450783, 27451049, 27451182
     27451187, 27451531, 27452760, 27453225, 27457666, 27457891, 27458164
     27459909, 27460675, 27467543, 27469329, 27471876, 27472969, 27473800
     27479358, 27483974, 27484556, 27486253, 27487795, 27489719, 27496224
     27496308, 27497950, 27498477, 27501327, 27501413, 27501465, 27502420
     27504190, 27505603, 27506774, 27508985, 27511196, 27512439, 27517818
     27518227, 27518310, 27520070, 27520900, 27522245, 27523368, 27523800
     27525909, 27532375, 27533819, 27534509, 27537472, 27544030, 27545630
     27547732, 27550341, 27551855, 27558557, 27558559, 27558861, 27560702
     27563629, 27563767, 27570318, 27577758, 27579353, 27580996, 27585755
     27585800, 27586810, 27586895, 27587672, 27591842, 27592466, 27593389
     27595973, 27599689, 27602091, 27602488, 27603841, 27604293, 27607805
     27608669, 27610269, 27613080, 27613247, 27615608, 27616657, 27617522
     27625274, 27625620, 27631506, 27634676, 27635508, 27644757, 27649707
     27652302, 27663370, 27664702, 27679488, 27679664, 27679806, 27679961
     27680162, 27680509, 27682151, 27688099, 27688692, 27690578, 27691809
     27692215, 27693713, 27697092, 27701795, 27705761, 27707544, 27709046
     27718914, 27719187, 27723002, 27726269, 27726780, 27732323, 27739006
     27740844, 27744211, 27745220, 27747869, 27748954, 27751006, 27753336
     27757567, 27772815, 27773602, 27774320, 27774539, 27779886, 27780562
     27782339, 27783289, 27786772, 27791223, 27797290, 27803665, 27807441
     27812560, 27812593, 27813267, 27815347, 27818871, 27832643, 27833369
     27834984, 27840386, 27847259, 27851757, 27861909, 27869339, 27873643
     27882176, 27892488, 27924147, 27926113, 27930478, 27934468, 27941896
     27945870, 27950708, 27952762, 27961746, 27964051, 27970265, 27971575
     27984028, 27989849, 27993289, 27994333, 27997875, 27999597, 28021205
     28022847, 28033429, 28057267, 28059199, 28072130, 28098865, 28106402
     28132287, 28169711, 28174827, 28184554, 28188330, 25929650, 28264172



--------------------------------------------------------------------------------

OPatch succeeded.
$ $ORACLE_HOME/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.

So this 18c version has some of the July 2018 release updates (180717 = 17-JUL-2018).
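The six digits at the end of the release update string are just a YYMMDD date stamp, so they can be decoded with plain string slicing. A small sketch, assuming the version string format shown above:

```shell
#!/bin/sh
# The RU description "18.3.0.0.180717" embeds the release date as YYMMDD.
ru_version='18.3.0.0.180717'

stamp=${ru_version##*.}                  # last dotted field -> 180717
yy=$(printf '%s' "$stamp" | cut -c1-2)   # 18
mm=$(printf '%s' "$stamp" | cut -c3-4)   # 07
dd=$(printf '%s' "$stamp" | cut -c5-6)   # 17
echo "20${yy}-${mm}-${dd}"               # prints 2018-07-17
```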

I have also checked the SQL*Plus banner:

$ sqlplus -v

SQL*Plus: Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
$ 
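Note that since 18c the version banner spans two lines (Release 18.0.0.0.0 plus Version 18.3.0.0.0), so a monitoring script that parsed only the single Release line would now report 18.0.0.0.0. A sketch of extracting the precise version, using the sqlplus -v output above as embedded sample input:

```shell
#!/bin/sh
# "sqlplus -v" output since 18c (sample copied from the check above).
banner='SQL*Plus: Release 18.0.0.0.0 - Production
Version 18.3.0.0.0'

# The precise version is on the "Version" line, not the "Release" line.
full_version=$(printf '%s\n' "$banner" | awk '/^Version/ {print $2}')
echo "$full_version"
```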
Silent database creation

I have created directories for the databases with the root account:

# mkdir /u01/oradata
# chown oracle:dba /u01/oradata
# mkdir /u01/fra
# chown oracle:dba /u01/fra

I have used the following script to create a container database named CDB with one pluggable database:

dbca -silent \
-createDatabase \
-templateName General_Purpose.dbc \
-gdbName CDB \
-sid CDB \
-createAsContainerDatabase true \
-numberOfPdbs 1 \
-pdbName pdb \
-pdbadminUsername pdba \
-pdbadminPassword oracle \
-SysPassword oracle \
-SystemPassword oracle \
-emConfiguration NONE \
-storageType FS \
-datafileDestination /u01/oradata \
-recoveryAreaDestination /u01/fra \
-recoveryAreaSize 3200  \
-characterSet AL32UTF8 \
-memoryPercentage 40 \
-enableArchive true \
-redoLogFileSize 100

Output is:

[WARNING] [DBT-06801] Specified Fast Recovery Area size (3,200 MB) is less than the recommended value.
   CAUSE: Fast Recovery Area size should at least be three times the database size (2,446 MB).
   ACTION: Specify Fast Recovery Area Size to be at least three times the database size.
[WARNING] [DBT-06208] The 'SYS' password entered does not conform to the Oracle recommended standards.
   CAUSE: 
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06208] The 'SYSTEM' password entered does not conform to the Oracle recommended standards.
   CAUSE: 
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06208] The 'PDBADMIN' password entered does not conform to the Oracle recommended standards.
   CAUSE: 
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06801] Specified Fast Recovery Area size (3,200 MB) is less than the recommended value.
   CAUSE: Fast Recovery Area size should at least be three times the database size (3,309 MB).
   ACTION: Specify Fast Recovery Area Size to be at least three times the database size.
Prepare for db operation
8% complete
Copying database files
31% complete
Creating and starting Oracle instance
32% complete
36% complete
40% complete
43% complete
46% complete
Completing Database Creation
51% complete
53% complete
54% complete
Creating Pluggable Databases
58% complete
77% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/base/cfgtoollogs/dbca/CDB.
Database Information:
Global Database Name:CDB
System Identifier(SID):CDB
Look at the log file "/u01/base/cfgtoollogs/dbca/CDB/CDB.log" for further details.

I have checked the created database with:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
BANNER_FULL
--------------------------------------------------------------------------------
BANNER_LEGACY
--------------------------------------------------------------------------------
    CON_ID
----------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
	 0

BANNER
--------------------------------------------------------------------------------
BANNER_FULL
--------------------------------------------------------------------------------
BANNER_LEGACY
--------------------------------------------------------------------------------
    CON_ID
----------

SQL> select patch_id, status, description, action_time from dba_registry_sqlpatch;

  PATCH_ID STATUS		     DESCRIPTION								      ACTION_TIME
---------- ------------------------- -------------------------------------------------------------------------------- ------------------------------
  28090523 SUCCESS		     Database Release Update : 18.3.0.0.180717 (28090523)			      23-JUL-18 10.57.56.127734 PM
  27923415 SUCCESS		     OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)				      23-JUL-18 10.57.56.142065 PM

SQL> select name, cdb, log_mode from v$database;

NAME	  CDB LOG_MODE
--------- --- ------------
CDB	  YES ARCHIVELOG

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB				  READ WRITE NO

I have used the following script to create a non-container database named NCDB:

dbca -silent \
-createDatabase \
-templateName General_Purpose.dbc \
-gdbName NCDB \
-sid NCDB \
-createAsContainerDatabase false \
-SysPassword oracle \
-SystemPassword oracle \
-emConfiguration NONE \
-storageType FS \
-datafileDestination /u01/oradata \
-recoveryAreaDestination /u01/fra \
-recoveryAreaSize 3200  \
-characterSet AL32UTF8 \
-memoryPercentage 40 \
-enableArchive true \
-redoLogFileSize 100 

Output is:

[WARNING] [DBT-06208] The 'SYS' password entered does not conform to the Oracle recommended standards.
   CAUSE: 
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06208] The 'SYSTEM' password entered does not conform to the Oracle recommended standards.
   CAUSE: 
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06801] Specified Fast Recovery Area size (3,200 MB) is less than the recommended value.
   CAUSE: Fast Recovery Area size should at least be three times the database size (2,402 MB).
   ACTION: Specify Fast Recovery Area Size to be at least three times the database size.
Prepare for db operation
10% complete
Copying database files
40% complete
Creating and starting Oracle instance
42% complete
46% complete
50% complete
54% complete
60% complete
Completing Database Creation
66% complete
69% complete
70% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/base/cfgtoollogs/dbca/NCDB.
Database Information:
Global Database Name:NCDB
System Identifier(SID):NCDB
Look at the log file "/u01/base/cfgtoollogs/dbca/NCDB/NCDB.log" for further details.

I have checked the created database with:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
BANNER_FULL
--------------------------------------------------------------------------------
BANNER_LEGACY
--------------------------------------------------------------------------------
    CON_ID
----------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
	 0

BANNER
--------------------------------------------------------------------------------
BANNER_FULL
--------------------------------------------------------------------------------
BANNER_LEGACY
--------------------------------------------------------------------------------
    CON_ID
----------

SQL> select patch_id, status, description, action_time from dba_registry_sqlpatch;

  PATCH_ID STATUS		     DESCRIPTION								      ACTION_TIME
---------- ------------------------- -------------------------------------------------------------------------------- ------------------------------
  28090523 SUCCESS		     Database Release Update : 18.3.0.0.180717 (28090523)			      23-JUL-18 11.33.28.278762 PM
  27923415 SUCCESS		     OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)				      23-JUL-18 11.33.28.325217 PM

SQL> select name, cdb, log_mode from v$database;

NAME	  CDB LOG_MODE
--------- --- ------------
NCDB	  NO  ARCHIVELOG

SQL> show pdbs
SQL> 
Conclusion

As documented, Oracle Database 18c installation is now image-based, like Grid Infrastructure 12.2.0.1: the response file and parameters for software installation and database creation are very similar to those of Oracle 12.2.0.1.

Oracle Corp. has decided to release a patched version on OTN: this is something new for a new release (patched versions were previously available, but only in preinstalled VMs like the Hands-On Lab for Upgrade).

However, at the time of writing I have not found the announced RPM to install Oracle 18c on Linux.

Categories: DBA Blogs

Using BULK COLLECT/FORALL

Tom Kyte - Mon, 2018-07-23 15:06
I have a procedure that was previously using a "slow-by_slow" type procedure. I've converted it to BULK COLLECT and FORALL (test_cur, see livesql). The procedure is obtaining data from 2 tables then inserts this data into 2 corresponding tables. ...
Categories: DBA Blogs

API for context

Tom Kyte - Mon, 2018-07-23 15:06
Hi! What is the best way to get context value in Oracle? We have sys_context, which allows get value from context, but it forces to hardcode constants for namespaces and keys. Is there more graceful solution for it? For myself I've made a sepa...
Categories: DBA Blogs

AWR and ASH guide lines.

Tom Kyte - Mon, 2018-07-23 15:06
Dear Team, Please share some bokks details for AWR and ASH report analysis in depth. Thanks Pradeep
Categories: DBA Blogs

SQL Query optimization : is scalar subqueries killing the performance?

Tom Kyte - Mon, 2018-07-23 15:06
I need help on query optimization. The below-mentioned query is just a sample. Actual query has 50+ scalar subqueries. Base table table_xyz has over 5 million records. <code>SELECT id, seq, ( SELECT functio...
Categories: DBA Blogs

Random date between two date ranges

Tom Kyte - Mon, 2018-07-23 15:06
I need to pick one random date per month from July 2017 to June 2018. I have the below mentioned code which is only selecting them for entire 2017 Calendar year whereas I need it from July 2017 to June 2018. Any advise? with tst as ( Select lev...
Categories: DBA Blogs

table delete,truncate

Tom Kyte - Mon, 2018-07-23 15:06
Hi Team, In Our production environment When are checking the report is not working, after to analysis we found that there was master table has been truncated or deleted by somebody. Now we are trying to identify the sql_text against that table....
Categories: DBA Blogs

SOCAR Accelerates International Expansion with NetSuite

Oracle Press Releases - Mon, 2018-07-23 15:00
Press Release
SOCAR Accelerates International Expansion with NetSuite Car sharing startup scales its business with successful launch in Malaysia

KUALA LUMPUR—Jul 23, 2018

SOCAR, a South Korean car sharing startup, has implemented Oracle NetSuite to support its rapid growth and international expansion. With NetSuite, SOCAR has been able to successfully launch its service in Kuala Lumpur, Malaysia, and scale its business to support rapidly growing demand that has seen 72,000 new members added in just five months at the end of January this year. SOCAR has also seen over 24,000 usage hours recorded on its platform per month from Malaysia to date.

First launched in South Korea in 2011, SOCAR is an on-demand, keyless car sharing service that offers customers the convenience of a car without the commitment of ownership. With the SOCAR mobile app, customers can book a car, from anywhere and at any time, and then use it to meet their individual transport needs. After signing up more than 3.4 million members in South Korea and offering over 8,200 cars in 3,200 zones throughout the country, SOCAR decided to launch its service in Kuala Lumpur in January 2018. By partnering with the local government to strategically place cars at major public transport hubs, SOCAR has grown rapidly. To manage this growth and the complex requirements of its business model, SOCAR selected NetSuite.

“The car sharing market is rapidly growing and by reducing car ownership, it can help us live in cities with cleaner air, less traffic and more accessible parks, gyms and schools,” said Leon Foong, CEO, SOCAR. “Our business has grown rapidly in South Korea and we are seeing huge demand in Malaysia. We plan to have more than 1,000 cars available in Malaysia by the end of this year, which is a big increase from the 240 cars we started off with, and NetSuite is playing a huge role in supporting that growth.”

With NetSuite, SOCAR has been able to drive efficiencies, improve decision-making and successfully scale its business. By replacing a manual, Excel-driven process for reconciling and tracking financial transactions, NetSuite has enabled SOCAR to achieve a complete view into critical business information and efficiently manage its high volume of expenses, revenue and sales transaction data. As a result, the SOCAR accounting and finance teams now have accurate cash flow data and access to the general ledger at any time and from anywhere, which is crucial for any business to grow.

“The car sharing market will experience exponential growth in the future, with 35 million users expected to sign up for 25 million hours of driving time each month by 2021,” said Ronen Naishtein, general manager, Asia, HK and TW, Oracle NetSuite. “SOCAR is perfectly positioned to ride that growth and with NetSuite, the SOCAR team has a flexible and integrated approach to support its changing business requirements and expand into overseas markets quicker.”

Contact Info
Suzanne Myerson
Oracle NetSuite
+61 414 101 583
suzanne.myerson@oracle.com
Mizu Chitra
Text100 Singapore
+65 6603 9000
SGNetSuite@text100.com.sg
About SOCAR

SOCAR Malaysia is a car-sharing startup that aims to change the landscape of Malaysian transportation by bringing in a new era of multi-flex mobility. Prior to launching in Malaysia, SOCAR Korea has brought car-sharing to millions of South Koreans and grown its fleet to almost 10,000 cars.

SOCAR Malaysia is the company’s first overseas expansion and aims to give people the convenience of driving a car without the commitment of owning one. With its mobile application, SOCAR will make car-sharing convenient for everyone by allowing members to book and unlock cars on its platform with just a few taps. All cars on the SOCAR platform are keyless and prices are inclusive of petrol and insurance (comprehensive and personal accident coverage). SOCAR Malaysia is the fastest growing B2C car-sharing platform globally having grown its fleet from 0 to 450 cars in just 6 months. Over the next 6 months, the company aims to grow their fleet to over 1,000 cars and bring the convenience and joy of car-sharing to hundreds of thousands of members across Malaysia. Learn more at www.socar.my.

About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials / Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 40,000 organizations and subsidiaries in 199 countries and territories. For more information, please visit http://www.netsuite.com.sg.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Suzanne Myerson

  • +61 414 101 583

Mizu Chitra

  • +65 6603 9000

Syncing Active Directory users and groups to PostgreSQL

Yann Neuhaus - Mon, 2018-07-23 10:55

A lot of companies use Active Directory to manage their users and groups. What most of these companies also want to do is manage their database users and groups in Active Directory as well. PostgreSQL comes with LDAP/Kerberos authentication by default but does not provide anything that helps with managing users and groups in an external directory. And even for authentication, the user already needs to exist in PostgreSQL. One tool you might want to have a look at that helps with this requirement is pg-ldap-sync.

As usual I am using CentOS 7 for the scope of this post. To get pg-ldap-sync onto the system, PostgreSQL needs to be installed first, as pg_config is expected to be there. Once you have that, several packages need to be installed (openldap-clients is not required, but it is handy to have just in case you want to test some ldapsearch commands against Active Directory):

[root@pgadsync ~]$ yum install -y ruby rubygem-rake rubygems ruby-devel openldap-clients git

pg-ldap-sync can either be installed directly with ruby commands or you can install it from Git:

[postgres@pgadsync ~]$ git clone https://github.com/larskanis/pg-ldap-sync.git
[postgres@pgadsync ~]$ cd pg-ldap-sync
[postgres@pgadsync pg-ldap-sync]$ gem install bundler
[postgres@pgadsync pg-ldap-sync]$ bundle install
[postgres@pgadsync pg-ldap-sync]$ bundle exec rake install
[postgres@pgadsync pg-ldap-sync]$ which pg_ldap_sync 
~/bin/pg_ldap_sync
[postgres@pgadsync pg-ldap-sync]$ cd ..
[postgres@pgadsync ~]$ bin/pg_ldap_sync --help
Usage: bin/pg_ldap_sync [options]
    -v, --[no-]verbose               Increase verbose level
    -c, --config FILE                Config file [/etc/pg_ldap_sync.yaml]
    -t, --[no-]test                  Don't do any change in the database

And then, of course, you need something in the Active Directory for synchronization. In my test Active Directory I create a new “Organizational Unit” called “PostgreSQL”:

Selection_001

Inside this “Organizational Unit” there is a user which is used for authenticating against Active Directory:
Selection_002

Then we have two other “Organizational Units”, one for the PostgreSQL DBAs and one for the groups we’d like to sync:
Selection_003

There are three people in the pgadmins unit:
Selection_004

There is one group in the groups unit:
Selection_005

… and the group has two members:
Selection_006

This is what we want to synchronize to PostgreSQL. The final requirement is that two roles need to exist in PostgreSQL (you’ll see later why that is important):

postgres@pgbox:/home/postgres/ [PG10] psql -X postgres
psql (10.3)
Type "help" for help.

postgres=# \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

postgres=# create role ldap_users;
CREATE ROLE
postgres=# create role ldap_groups;
CREATE ROLE
postgres=# \du
                                    List of roles
  Role name  |                         Attributes                         | Member of 
-------------+------------------------------------------------------------+-----------
 ldap_groups | Cannot login                                               | {}
 ldap_users  | Cannot login                                               | {}
 postgres    | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

postgres=# 

With pg-ldap-sync each instance you want to have synchronized needs a separate yaml file like this one:

# With this sample config the distinction between LDAP-synchronized
# groups/users and locally created ones is done by membership in ldap_users
# and ldap_groups. These two roles have to be defined manually before
# pg_ldap_sync can run.

# Connection parameters to LDAP server
# see also: http://net-ldap.rubyforge.org/Net/LDAP.html#method-c-new
ldap_connection:
  host: 172.22.30.1
  port: 389
  auth:
    method: :simple
    username: CN=pgadsync,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
    password: xxxxx
#  encryption:
#    method: :simple_tls

# Search parameters for LDAP users which should be synchronized
ldap_users:
  base: OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
  # LDAP filter (according to RFC 2254)
  # defines to users in LDAP to be synchronized
#  filter: (&(objectClass=person)(objectClass=organizationalPerson)(givenName=*)(sn=*)(sAMAccountName=*))
  filter: (sAMAccountName=*)
  # this attribute is used as PG role name
  name_attribute: sAMAccountName
  # lowercase name for use as PG role name
  lowercase_name: true

# Search parameters for LDAP groups which should be synchronized
ldap_groups:
  base: OU=pggroups,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
  filter: (cn=dbas)
  # this attribute is used as PG role name
  name_attribute: cn
  # lowercase name for use as PG role name
  lowercase_name: false
  # this attribute must reference to all member DN's of the given group
  member_attribute: member

# Connection parameters to PostgreSQL server
# see also: http://rubydoc.info/gems/pg/PG/Connection#initialize-instance_method
pg_connection:
  host: 192.168.22.99
  dbname: postgres
  user: postgres
  password: postgres

pg_users:
  # Filter for identifying LDAP generated users in the database.
  # It's the WHERE-condition to "SELECT rolname, oid FROM pg_roles"
  filter: oid IN (SELECT pam.member FROM pg_auth_members pam JOIN pg_roles pr ON pr.oid=pam.roleid WHERE pr.rolname='ldap_users')
  # Options for CREATE ROLE statements
  create_options: LOGIN IN ROLE ldap_users

pg_groups:
  # Filter for identifying LDAP generated groups in the database.
  # It's the WHERE-condition to "SELECT rolname, oid FROM pg_roles"
  filter: oid IN (SELECT pam.member FROM pg_auth_members pam JOIN pg_roles pr ON pr.oid=pam.roleid WHERE pr.rolname='ldap_groups')
  # Options for CREATE ROLE statements
  create_options: NOLOGIN IN ROLE ldap_groups
#grant_options:

When you have a look at the “pg_users” and “pg_groups” sections you will notice why the two PostgreSQL roles created above are required. They are used to distinguish the users and groups coming from the directory from those created locally.
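If you want to check what the pg_users filter will actually match, you can run the WHERE-condition from the config yourself — this is just the filter from the yaml above wrapped in a full query:

```sql
-- List the roles pg_ldap_sync will treat as LDAP-managed users:
-- everything that is a member of the ldap_users marker role.
SELECT rolname
  FROM pg_roles
 WHERE oid IN (SELECT pam.member
                 FROM pg_auth_members pam
                 JOIN pg_roles pr ON pr.oid = pam.roleid
                WHERE pr.rolname = 'ldap_users');
```

The same query with 'ldap_groups' shows the synchronized groups.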

Ready to sync:

[postgres@pgadsync ~]$ bin/pg_ldap_sync -c etc/pg_ldap_sync.yaml -vv 
I, [2018-07-23T14:23:46.350588 #29270]  INFO -- : found user-dn: CN=dba1,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.360073 #29270]  INFO -- : found user-dn: CN=dba2,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.363133 #29270]  INFO -- : found user-dn: CN=dba3,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.474105 #29270]  INFO -- : found group-dn: CN=dbas,OU=pggroups,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.517468 #29270]  INFO -- : user stat: create: 3 drop: 0 keep: 0
I, [2018-07-23T14:23:46.517798 #29270]  INFO -- : group stat: create: 1 drop: 0 keep: 0
I, [2018-07-23T14:23:46.518047 #29270]  INFO -- : membership stat: grant: 2 revoke: 0 keep: 0
I, [2018-07-23T14:23:46.518201 #29270]  INFO -- : SQL: CREATE ROLE "dba1" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:23:46.522229 #29270]  INFO -- : SQL: CREATE ROLE "dba2" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:23:46.525156 #29270]  INFO -- : SQL: CREATE ROLE "dba3" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:23:46.528058 #29270]  INFO -- : SQL: CREATE ROLE "dbas" NOLOGIN IN ROLE ldap_groups
I, [2018-07-23T14:23:46.531065 #29270]  INFO -- : SQL: GRANT "dbas" TO "dba3","dba1" 

… and that’s it. Users and groups are now available in PostgreSQL:

postgres=# \du
                                        List of roles
  Role name  |                         Attributes                         |     Member of     
-------------+------------------------------------------------------------+-------------------
 dba1        |                                                            | {ldap_users,dbas}
 dba2        |                                                            | {ldap_users}
 dba3        |                                                            | {ldap_users,dbas}
 dbas        | Cannot login                                               | {ldap_groups}
 ldap_groups | Cannot login                                               | {}
 ldap_users  | Cannot login                                               | {}
 postgres    | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

When you add another user to the directory:

Selection_007

… and run the sync again, the existing users will of course not be touched but the new one gets created (notice that I created dba4 by copying an existing user in the directory, which is why the new user is already a member of the dbas group):

[postgres@pgadsync ~]$ bin/pg_ldap_sync -c etc/pg_ldap_sync.yaml -vv 
I, [2018-07-23T14:27:26.314729 #29273]  INFO -- : found user-dn: CN=dba1,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.323719 #29273]  INFO -- : found user-dn: CN=dba2,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.326764 #29273]  INFO -- : found user-dn: CN=dba3,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.328800 #29273]  INFO -- : found user-dn: CN=dba4,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.394066 #29273]  INFO -- : found group-dn: CN=dbas,OU=pggroups,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.434236 #29273]  INFO -- : found pg-user: "dba1"
I, [2018-07-23T14:27:26.434443 #29273]  INFO -- : found pg-user: "dba2"
I, [2018-07-23T14:27:26.434531 #29273]  INFO -- : found pg-user: "dba3"
I, [2018-07-23T14:27:26.439065 #29273]  INFO -- : found pg-group: "dbas" with members: ["dba3", "dba1"]
I, [2018-07-23T14:27:26.439357 #29273]  INFO -- : user stat: create: 1 drop: 0 keep: 3
I, [2018-07-23T14:27:26.439468 #29273]  INFO -- : group stat: create: 0 drop: 0 keep: 1
I, [2018-07-23T14:27:26.439656 #29273]  INFO -- : membership stat: grant: 1 revoke: 0 keep: 2
I, [2018-07-23T14:27:26.439759 #29273]  INFO -- : SQL: CREATE ROLE "dba4" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:27:26.441692 #29273]  INFO -- : SQL: GRANT "dbas" TO "dba4" 

Two more tips: when you want the complete LDAP path for a user, you can get it like this:
Selection_008

It is advisable to test the filters you have in the yaml file before running the synchronization, for example:

[postgres@pgadsync ~]$ ldapsearch -x -h 172.22.30.1 -D "pgadsync@test.dbiservices.com" -W "(sAMAccountName=*)" -b "OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com"  | grep sAMAccountName
Enter LDAP Password: 
# filter: (sAMAccountName=*)
sAMAccountName: dba1
sAMAccountName: dba2
sAMAccountName: dba3
sAMAccountName: dba4

You might wonder how you can assign permissions, then. Just pre-create the role and grant the permissions you want:

postgres=# drop role dbas;
DROP ROLE
postgres=# create role dbas in role ldap_groups;
CREATE ROLE
postgres=# grant CONNECT ON DATABASE postgres to dbas;
GRANT

The assignments to that group will come from the directory once you run the next synchronization.

Hope that helps …

 

Cet article Syncing Active Directory users and groups to PostgreSQL est apparu en premier sur Blog dbi services.

Oracle Analytics Cloud Workshop FAQ

Rittman Mead Consulting - Mon, 2018-07-23 07:59

A few weeks ago, I had the opportunity to present the Rittman Mead Oracle Analytics Cloud workshop in Oracle's head office in London. The aim of the workshop was to educate potential OAC customers and give them the tools and knowledge to decide whether or not OAC was the right solution for them. We had a great cross section of multiple industries (although telecoms were over represented!) and OBIEE familiarity. Together we came up with a series of questions that needed to be answered to help in the decision making process. In the coming workshops we will add more FAQ style posts to the blog to help flesh out the features of the product.

If you are interested in coming along to one of the workshops to get some hands on time with OAC, send an email to training@rittmanmead.com and we can give you the details.

Do Oracle provide a feature comparison list between OBIEE on premise and OAC?

Oracle do not provide a feature comparison between on-premise and OAC. However, Rittman Mead have done an initial comparison between OAC and traditional on-premise OBIEE 12c installations:

High Level
  • Enterprise Analytics is identical to 12c Analytics
  • Only two Actions available in OAC: Navigate to BI content, Navigate to Web page
  • BI Publisher is identical in 12c and OAC
  • Data Visualiser has additional features and a slightly different UI in OAC compared to 12c
BI Developer Client Tool for OAC
  • Looks exactly the same as the OBIEE client
  • Available only for Windows, straightforward installation
  • OAC IP address and BI Server port must be provided to create an ODBC data source
  • Allows you to open and edit the OAC model online
  • Allows offline development; the Snapshots interface is used to upload the model to OAC (it will completely replace the existing model)
Data Modeler
  • Alternative tool to create and manage metadata models
  • Very easy to use, but limited compared to the BI Developer Client.
Catalog
  • It's possible to archive/unarchive catalog folders from on-premise to OAC.
BAR file
  • It's possible to create OAC bar files
  • It's possible to migrate OAC bar files to OBIEE 12c
Can you ever be charged for network usage, for example a connection to an on-premise data source using RDC?

Oracle will not charge you for network usage as things stand. Your charges come from the following:

  • Which version of OAC you have (Standard, Data Lake or Enterprise)
  • Whether you are using Pay-as-you-go or Monthly Commitments
  • The amount of disk space you have specified during provisioning
  • The combination of OCPU and RAM currently in use (size).
  • The up-time of your environment.

So, for example, an environment that has 1 OCPU with 7.5 GB RAM will cost less than an environment with 24 OCPUs with 180 GB RAM if they are up for the same amount of time, everything else being equal. That being said, there is an additional charge on top of the analytics license, as a cloud database is required to configure and launch an analytics instance; this should be taken into consideration when choosing Oracle Analytics Cloud.
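To make the billing model above concrete — cost scales roughly with OCPUs × uptime — here is a tiny sketch. The rate below is a made-up placeholder, not an Oracle price:

```shell
#!/bin/sh
# Illustration of the OAC billing model: cost scales with OCPUs x uptime.
# The rate is a made-up placeholder, not an Oracle price.
RATE_CENTS_PER_OCPU_HOUR=50   # hypothetical: 0.50 per OCPU-hour

estimate() {
  # estimate <ocpus> <hours_up> -> compute cost in cents
  echo $(( $1 * RATE_CENTS_PER_OCPU_HOUR * $2 ))
}

# oc3-sized instance (1 OCPU) running 24x7 for a 30-day month
echo "1 OCPU always on:       $(estimate 1 720) cents"
# oc8-sized instance (24 OCPUs) up only for business hours (8h x 22 days)
echo "24 OCPUs business hours: $(estimate 24 176) cents"
```

The point: a big shape that is only up when needed can be cheaper than it looks, which is why the scaling and scheduling features discussed below matter.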

Do you need to restart the OAC environment when you change the RAM and OCPU settings?

Configuring the number of OCPUs and associated RAM is done from the Analytics Service Console. This can be done on a running instance without a manual restart; however, the analytics service will be unavailable while the change is applied:

alt

PaaS Service Manager Command Line Interface (PSM Cli), which Francesco covered here, allows this to be scripted and scheduled. An interesting use case would be to increase resources during month-end processing, when your concurrent user count is at its highest, whilst scaling back down in the quieter parts of the month.

This is done using the 'scale' command, which takes a JSON file as a parameter describing what the environment should look like. You will notice in the example below that the JSON file refers to an object called 'shape'; this is the combination of OCPU and RAM that you want the instance to scale to. Some example shapes are:

  • oc3 — 1 OCPU with 7.5 GB RAM
  • oc4 — 2 OCPUs with 15 GB RAM
  • oc5 — 4 OCPUs with 30 GB RAM
  • oc6 — 8 OCPUs with 60 GB RAM
  • oc7 — 16 OCPUs with 120 GB RAM
  • oc8 — 24 OCPUs with 180 GB RAM
  • oc9 — 32 OCPUs with 240 GB RAM

For example, the following scales the rittmanmead-analytics-prod service to the oc9 shape:

$ psm analytics scale -s rittmanmead-analytics-prod -c ~/oac-obiee/scale-to-monthend.json
where the JSON file contains the following:

{ "components" : { "BI" : { "shape" : "oc9", "hosts" : ["rittmanmead-prod-1"] } } }

Oracle supply documentation for the commands required here: https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/analytics-scale2.html .
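As a sketch of the month-end scaling idea, a cron-driven wrapper could look like this. The service name, host and shape choices are assumptions, it presumes `psm setup` has already been run, and the actual psm call is left commented out:

```shell
#!/bin/sh
# Hypothetical month-end scaling wrapper around the PSM CLI.
# Service and host names are examples, not real resources.
SERVICE="rittmanmead-analytics-prod"
DAY=$(date +%d)
if [ "$DAY" -ge 28 ]; then
  SHAPE="oc9"   # 32 OCPUs / 240 GB RAM for month-end load
else
  SHAPE="oc4"   # 2 OCPUs / 15 GB RAM for the quiet part of the month
fi

# Build the JSON payload the 'scale' command expects.
cat > /tmp/scale.json <<EOF
{ "components" : { "BI" : { "shape" : "${SHAPE}", "hosts" : ["rittmanmead-prod-1"] } } }
EOF

echo "Would scale ${SERVICE} to ${SHAPE}"
# psm analytics scale -s "${SERVICE}" -c /tmp/scale.json
```

Scheduled daily from cron, this would scale the instance up just before month-end processing and back down afterwards.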

How is high availability provisioned in Oracle Analytics Cloud?

Building a high available infrastructure in the cloud needs to take into consideration three main areas:

Server Failure: Oracle Analytics Cloud can be clustered; additional nodes (up to 10) can be added dynamically in the Cloud 'My Services' console should they be needed:

alt

It is also possible to provision a load balancer, as you can see from the screenshot below:

alt

Zone Failure: Sometimes it is more than just a single server that causes the failure. Cloud platforms are built on server farms, which themselves can suffer network issues, power failures and weather anomalies. Oracle Analytics Cloud allows you to create an instance in a region, much like Amazon's "availability zones". A sensible precaution would be to create a disaster recovery environment in a different region to your main production environment; to help reduce costs this can be provisioned on the Pay-as-you-go license model and therefore only be chargeable when it is being used.

Cloud Failure: Although rare, the cloud platform itself can sometimes fail. For example, both of the data centres you chose to mitigate the previous point could fall victim to the same weather anomaly. Oracle Analytics Cloud allows you to take regular backups of your reports, dashboards and metadata, which can be downloaded, stored off-cloud and re-implemented in another 12c environment.

In addition to these points, it's advisable to automate and test everything. Oracle supply a very handy set of scripts and an API called the PaaS Service Manager Command Line Interface (PSM Cli) which can be used to achieve this. For example, it can automate backups, set up monitoring and alerting and, arguably most importantly, test your DR and HA infrastructure.

Can you push the user credentials down to the database?

At this point in time there is no way to configure database authentication providers in a similar way to the WebLogic providers of the past. However, Oracle IDCS does have a REST API that could be used to simulate this functionality; documentation can be found here: https://docs.oracle.com/en/cloud/paas/identity-cloud/rest-api/OATOAuthClientWebApp.html

You can store user group memberships in a database and have your service's authentication provider access this information when authenticating a user's identity. You can use the script configure_bi_sql_group_provider to set up the provider and create the tables that you need (GROUPS and GROUPMEMBERS). After you run the script, you must populate the tables with your group and group member (user) information.

Group memberships that you derive from the SQL provider don't show up in the Users and Roles page in the Oracle Analytics Cloud Console as you might expect, but the member assignments work correctly.

alt

These tables are in the Oracle Database Cloud Service you configured for Oracle Analytics Cloud and in the schema created for your service. Unlike the on-premises equivalent functionality, you can’t change the location of these tables or the SQL that retrieves the results.
The script to achieve this is stored on the analytics server itself and can be accessed using SSH (as the user 'opc') with the private keys that you created during the instance provisioning process. It is located at: /bi/app/public/bin/configure_bi_sql_group_provider
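Once the script has created the tables, populating them is plain SQL. The statements below are only an illustration — the column names are an assumption, so check the definitions of the tables the script actually created in your schema:

```sql
-- Hypothetical example (column names assumed): one group with two members.
INSERT INTO GROUPS (G_NAME) VALUES ('BIContentAuthors');
INSERT INTO GROUPMEMBERS (G_NAME, G_MEMBER) VALUES ('BIContentAuthors', 'dba1');
INSERT INTO GROUPMEMBERS (G_NAME, G_MEMBER) VALUES ('BIContentAuthors', 'dba2');
COMMIT;
```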

Can you implement SSL certificates in Oracle Analytics Cloud?

The short answer is yes.

When Oracle Analytics Cloud instances are created, similarly to on-premise OBIEE instances, a self-signed certificate is generated. The self-signed certificate is intended to be temporary and you must replace it with a new private key and a certificate signed by a certification authority. Doc ID 2334800.1 on support.oracle.com has the full details on how to implement this, but the high-level steps (taken from the document itself) are:

  • Associate a custom domain name against the public ip of your OAC instance
  • Get the custom SSL certificate from a Certificate Authority
  • Specify the DNS registered host name that you want to secure with SSL in servername.conf
  • Install the intermediate certificate
  • Run the script to register the new private key and server certificate
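For the second step, generating the private key and the certificate signing request to send to the CA is standard openssl work; the file names and subject below are placeholders for your own domain:

```shell
# Generate a 2048-bit private key and a CSR for the custom domain.
# File names and the subject are placeholders - adjust to your environment.
openssl genrsa -out analytics.key 2048
openssl req -new -key analytics.key -out analytics.csr \
  -subj "/C=GB/O=Example Ltd/CN=analytics.example.com"
```

The CA returns the signed server certificate (plus any intermediate certificates), which you then register with the script mentioned in the support document.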
Can you implement Single Sign On (SSO) in Oracle Analytics Cloud?

Oracle Identity Cloud Service (IDCS) allows administrators to create security providers for OAC, much like the WebLogic providers in on-premise OBIEE. These can be created/edited to include single sign-on URLs, certificates etc., as shown in the screenshot below:

alt

Oracle support Doc ID 2399789.1 covers this in detail between Microsoft Azure AD and OAC, and is well worth the read.

Are RPD files (BAR files) backwards compatible?

This depends on what has changed between the releases. A new OAC version number doesn't necessarily mean changes to the OBIEE components themselves (e.g. it could just be an improvement to the 'My Services' UI). However, if there have been changes to the way the XML is formed in reports, for example, these won't be compatible with previous versions of the catalog. That said, environments can be upgraded at any time, so you should be able to take a snapshot of your environment, upgrade it to match the newer version and then redeploy/refresh from your snapshot.

How do you connect securely to AWS?

There doesn't seem to be any documentation on how exactly Visual Analyzer connects to Amazon Redshift using the 'Create Connection' wizard. However, there is an option to create an SSL ODBC connection to the Redshift database that can then be used to connect using the Visual Analyzer ODBC connection wizard:

alt

Can you still edit instanceconfig and nqsconfig files?

Yes you can; you need to use your SSH keys to sign in to the box (as the user 'opc'). The files are in the following locations:

/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml

/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI

It's also worth mentioning that there is a guide (see the support document below) which explains where the responsibility lies should anything break during customisations of the platform.

Who is responsible for what regarding support?

Guide to Customer vs Oracle Management Responsibilities in Oracle Infrastructure and Platform Cloud Services (Doc ID 2309936.1)


Categories: BI & Warehousing

Oracle Analytics Cloud Workshop FAQ

Rittman Mead Consulting - Mon, 2018-07-23 07:59

A few weeks ago, I had the opportunity to present the Rittman Mead Oracle Analytics Cloud workshop at Oracle's head office in London. The aim of the workshop was to educate potential OAC customers and give them the tools and knowledge to decide whether or not OAC is the right solution for them. We had a great cross-section of multiple industries (although telecoms were over-represented!) and of OBIEE familiarity. Together we came up with a series of questions that needed answering to help in the decision-making process. In the coming workshops we will add more FAQ-style posts to the blog to help flesh out the features of the product.

If you are interested in coming along to one of the workshops to get some hands-on time with OAC, send an email to training@rittmanmead.com and we can give you the details.

Do Oracle provide a feature comparison list between OBIEE on premise and OAC?

Oracle do not provide a feature comparison between on-premise and OAC. However, Rittman Mead have done an initial comparison between OAC and traditional on-premise OBIEE 12c installations:

High Level
  • Enterprise Analytics is identical to 12c Analytics
  • Only two Actions available in OAC: Navigate to BI content, Navigate to Web page
  • BI Publisher is identical in 12c and OAC
  • Data Visualiser has additional features and a slightly different UI in OAC compared to 12c
BI Developer Client Tool for OAC
  • Looks exactly the same as the OBIEE client
  • Available only for Windows, straightforward installation
  • OAC IP address and BI Server port must be provided to create an ODBC data source
  • Allows you to open and edit the OAC model online
  • Allows offline development; the Snapshots interface is used to upload the model to OAC (completely replacing the existing model)
Data Modeler
  • Alternative tool to create and manage metadata models
  • Very easy to use, but limited compared to the BI Developer Client.
Catalog
  • It's possible to archive/unarchive catalog folders from on-premise to OAC.
BAR file
  • It's possible to create OAC bar files
  • It's possible to migrate OAC bar files to OBIEE 12c
Can you ever be charged by network usage, for example connecting to an on-premise data source using Remote Data Connector (RDC)?

Oracle will not charge you for network usage as things stand. Your charges come from the following:

  • Which version of OAC you have (Standard, Data Lake or Enterprise)
  • Whether you are using Pay-as-you-go or Monthly Commitments
  • The amount of disk space you have specified during provisioning
  • The combination of OCPU and RAM currently in use (size).
  • The up-time of your environment.

For example, an environment with 1 OCPU and 7.5 GB RAM will cost less than one with 24 OCPUs and 180 GB RAM if both are up for the same amount of time, everything else being equal. That said, a cloud database is required to configure and launch an analytics instance, so its cost should be factored in on top of the Oracle Analytics Cloud licence itself.

Do you need to restart the OAC environment when you change the RAM and OCPU settings?

Configuring the number of OCPUs and associated RAM is done from the Analytics Service Console. This can be done during uptime without a service restart; however, the analytics service will be unavailable while the change is applied:

[Screenshot: OCPU and RAM settings in the Analytics Service Console]

PaaS Service Manager Command Line Interface (PSM CLI), which Francesco covered here, allows this to be scripted and scheduled. An interesting use case would be to increase resources during month-end processing, when your concurrent user count is at its highest, and scale back down during the quieter parts of the month.

This is done using the 'scale' command, which takes a JSON file as a parameter describing what the environment should look like. You will notice in the example below that the JSON file refers to an object called 'shape': this is the combination of OCPU and RAM that you want the instance to scale to. Some example shapes are:

  • oc3 — 1 OCPU with 7.5 GB RAM
  • oc4 — 2 OCPUs with 15 GB RAM
  • oc5 — 4 OCPUs with 30 GB RAM
  • oc6 — 8 OCPUs with 60 GB RAM
  • oc7 — 16 OCPUs with 120 GB RAM
  • oc8 — 24 OCPUs with 180 GB RAM
  • oc9 — 32 OCPUs with 240 GB RAM

For example:

The following example scales the rittmanmead-analytics-prod service to the oc9 shape:

$ psm analytics scale -s rittmanmead-analytics-prod -c ~/oac-obiee/scale-to-monthend.json

where the JSON file contains the following:

{ "components" : { "BI" : { "shape" : "oc9", "hosts" : ["rittmanmead-prod-1"] } } }

Oracle supply documentation for the command here: https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/analytics-scale2.html
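As a sketch, the scale operation above can be wrapped in a small script. The service name, host name and file path below are illustrative assumptions, not values from a real environment:

```shell
#!/bin/sh
# Sketch: scale an OAC instance to a larger shape for month-end processing.
# Assumes PSM CLI is installed and configured (psm setup); the service and
# host names are placeholders.

SERVICE="rittmanmead-analytics-prod"
PAYLOAD="/tmp/scale-to-monthend.json"

# Build the scale payload for a given shape (e.g. oc9 = 32 OCPUs / 240 GB RAM).
make_scale_payload() {
  shape="$1"
  host="$2"
  printf '{ "components" : { "BI" : { "shape" : "%s", "hosts" : ["%s"] } } }\n' \
    "$shape" "$host"
}

make_scale_payload "oc9" "rittmanmead-prod-1" > "$PAYLOAD"

# Only invoke PSM if it is actually installed on this machine.
if command -v psm >/dev/null 2>&1; then
  psm analytics scale -s "$SERVICE" -c "$PAYLOAD"
else
  echo "psm not found - payload written to $PAYLOAD"
fi
```

A companion script with a smaller shape (e.g. oc3) could then be scheduled to scale back down once month-end processing completes.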

How is high availability provisioned in Oracle Analytics Cloud?

Building a highly available infrastructure in the cloud needs to take three main areas into consideration:

Server Failure: Oracle Analytics Cloud can be clustered; additional nodes (up to 10) can be added dynamically in the Cloud 'My Services' console should they be needed:

[Screenshot: adding cluster nodes in the Cloud 'My Services' console]

It is also possible to provision a load balancer, as you can see from the screenshot below: 

[Screenshot: load balancer provisioning options]

Zone Failure: Sometimes the failure is more than a single server. Cloud platforms are built on server farms, which can themselves be affected by network issues, power failures and weather anomalies. Oracle Analytics Cloud allows you to create an instance in a region, much like Amazon's "availability zones". A sensible precaution is to create a disaster recovery environment in a different region from your main production environment; to reduce costs this can be provisioned on the Pay-as-you-go licence model and therefore only be chargeable when it's being used.

Cloud Failure: Although rare, the cloud platform itself can fail. For example, both of the data centres you chose to counter the previous point could fall victim to the same weather anomaly. Oracle Analytics Cloud allows you to take regular backups of your reports, dashboards and metadata, which can be downloaded, stored off-cloud and re-implemented in another 12c environment.

In addition to these points, it's advisable to automate and test everything. Oracle supply a very handy set of scripts and an API, the PaaS Service Manager Command Line Interface (PSM CLI), which can be used to achieve this. For example, it can be used to automate backups, set up monitoring and alerting, and, arguably most importantly, test your DR and HA infrastructure.
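As an illustration of the kind of automation this enables, the sketch below generates a crontab fragment that pauses a non-production instance overnight to cut uptime-based charges. The `stop-service`/`start-service` subcommand names and the service name are assumptions to verify against the PSM CLI help for your version:

```shell
#!/bin/sh
# Sketch: write a crontab fragment that stops a dev OAC instance out of
# hours and restarts it in the morning. The psm subcommand names and the
# service name are assumptions - check `psm analytics help` first.

CRON_FILE="/tmp/oac-dev-schedule.cron"

cat > "$CRON_FILE" <<'EOF'
# Stop the dev instance at 20:00 every weekday evening.
0 20 * * 1-5  psm analytics stop-service  -s rittmanmead-analytics-dev
# Start it again at 07:00 every weekday morning.
0 7  * * 1-5  psm analytics start-service -s rittmanmead-analytics-dev
EOF

echo "Review $CRON_FILE, then install it with: crontab $CRON_FILE"
```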

Can you push the user credentials down to the database?

At this point in time there is no way to configure database authentication providers in the way WebLogic providers could be configured in the past. However, Oracle IDCS does have a REST API that could be used to simulate this functionality; documentation can be found here: https://docs.oracle.com/en/cloud/paas/identity-cloud/rest-api/OATOAuthClientWebApp.html
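As a sketch of how that REST API might be driven, the snippet below builds an OAuth2 client-credentials token request against IDCS. The tenant hostname, client ID and secret are placeholders, and the endpoint path and scope should be checked against the IDCS REST documentation linked above:

```shell
#!/bin/sh
# Sketch: construct an OAuth2 access-token request for Oracle Identity
# Cloud Service. All identifiers below are placeholders, not real values.

IDCS_HOST="idcs-tenant.identity.oraclecloud.com"   # placeholder tenant host
CLIENT_ID="myclientid"                             # placeholder
CLIENT_SECRET="myclientsecret"                     # placeholder

# IDCS expects HTTP Basic auth of "client_id:client_secret", base64-encoded.
AUTH=$(printf '%s:%s' "$CLIENT_ID" "$CLIENT_SECRET" | base64)

# Print the request rather than sending it, since the host is a placeholder.
echo curl -X POST "https://${IDCS_HOST}/oauth2/v1/token" \
  -H "Authorization: Basic ${AUTH}" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials&scope=urn:opc:idm:__myscopes__"
```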

You can store user group memberships in a database and have your service's authentication provider access this information when authenticating a user's identity. You can use the script configure_bi_sql_group_provider to set up the provider and create the tables that you need (GROUPS and GROUPMEMBERS). After you run the script, you must populate the tables with your group and group member (user) information.

Group memberships derived from the SQL provider don't show up in the Users and Roles page of the Oracle Analytics Cloud Console as you might expect, but the membership assignments work correctly.

[Screenshot: Users and Roles page in the Oracle Analytics Cloud Console]

These tables live in the Oracle Database Cloud Service you configured for Oracle Analytics Cloud, in the schema created for your service. Unlike the on-premises equivalent functionality, you can't change the location of these tables or the SQL that retrieves the results.

The script to achieve this is stored on the analytics server itself and can be accessed over SSH (as the user 'opc') using the private keys you created during instance provisioning. It is located at: /bi/app/public/bin/configurebisqlgroupprovider
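By way of illustration, the tables might be populated with a script like the one below. The column names (G_NAME, G_MEMBER), connect string and sample values are assumptions based on the on-premises SQL group provider layout, so verify them against the DDL the configuration script actually creates:

```shell
#!/bin/sh
# Sketch: seed the SQL group provider tables created by
# configure_bi_sql_group_provider. Column names, the connect string and
# all values are assumptions - check the actual table definitions first.

SEED_SQL="/tmp/seed_groups.sql"

cat > "$SEED_SQL" <<'SQL'
-- One row per group.
INSERT INTO GROUPS (G_NAME) VALUES ('BIReportAuthors');
-- One row per (group, member) pair.
INSERT INTO GROUPMEMBERS (G_NAME, G_MEMBER) VALUES ('BIReportAuthors', 'jsmith');
INSERT INTO GROUPMEMBERS (G_NAME, G_MEMBER) VALUES ('BIReportAuthors', 'akhan');
COMMIT;
SQL

# Run the seed script only if the Oracle client is present; the connect
# string is a placeholder for the OAC schema in your Database Cloud Service.
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -s oac_schema/password@dbcs_alias "@${SEED_SQL}"
else
  echo "sqlplus not found - review $SEED_SQL and run it manually"
fi
```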

Can you implement SSL certificates in Oracle Analytics Cloud?

The short answer is yes.

When Oracle Analytics Cloud instances are created, similarly to on-premise OBIEE instances, a self-signed certificate is generated. The self-signed certificate is intended to be temporary and you must replace it with a new private key and a certificate signed by a certification authority. Doc ID 2334800.1 on support.oracle.com has the full details, but the high-level steps (taken from the document itself) are:

  • Associate a custom domain name against the public ip of your OAC instance
  • Get the custom SSL certificate from a Certificate Authority
  • Specify the DNS registered host name that you want to secure with SSL in servername.conf
  • Install the intermediate certificate
  • Run the script to register the new private key and server certificate
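The key and certificate-request part of those steps can be sketched with standard openssl usage; the domain name and subject details below are placeholders:

```shell
#!/bin/sh
# Sketch: create a private key and certificate signing request (CSR) for
# the custom domain fronting an OAC instance. The domain and subject are
# placeholders; the CA-signed certificate and key are then registered
# following Doc ID 2334800.1.

DOMAIN="analytics.example.com"   # placeholder custom domain

# Generate a 2048-bit RSA key and a CSR in one step, no passphrase on the key.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout "/tmp/${DOMAIN}.key" \
  -out "/tmp/${DOMAIN}.csr" \
  -subj "/C=GB/O=Example Ltd/CN=${DOMAIN}"

# Quick sanity check that the CSR parses and carries the expected CN.
openssl req -in "/tmp/${DOMAIN}.csr" -noout -subject
```

The resulting CSR is what you submit to the certificate authority in the second step above.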
Can you implement Single Sign On (SSO) in Oracle Analytics Cloud?

Oracle Identity Cloud Service (IDCS) allows administrators to create security providers for OAC, much like the WebLogic providers in on-premise OBIEE. These can be created and edited to include single sign-on URLs, certificates etc., as shown in the screenshot below:

[Screenshot: IDCS security provider configuration]

Oracle support Doc ID 2399789.1 covers SSO between Microsoft Azure AD and OAC in detail, and is well worth the read.

Are RPD files (BAR files) backwards compatible?

That depends on what has changed between the releases. A new OAC version number doesn't necessarily mean changes to the OBIEE components themselves (it could, for example, just be an improvement to the 'My Services' UI). However, if there have been changes to the way the XML is formed in reports, for example, these won't be compatible with previous versions of the catalog. That said, environments can seemingly be upgraded at any time, so you should be able to take a snapshot of your environment, upgrade it to match the newer version and then redeploy/refresh from your snapshot.

How do you connect securely to AWS?

There doesn't seem to be any documentation on exactly how Visual Analyzer connects to Amazon Redshift through the 'Create Connection' wizard. However, there is an option to create an SSL ODBC connection to the Redshift database that can then be used in the Visual Analyzer ODBC connection wizard:

[Screenshot: SSL ODBC connection options for Redshift]

Can you still edit instanceconfig and nqsconfig files?

Yes you can; you need to use your SSH keys to sign into the box (as the user 'opc'). The files are in the following locations:

/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml

/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI

It's also worth mentioning that there is a guide here which explains where responsibility lies should anything break during customisation of the platform.

Who is responsible for what regarding support?

Guide to Customer vs Oracle Management Responsibilities in Oracle Infrastructure and Platform Cloud Services (Doc ID 2309936.1)

Categories: BI & Warehousing

Announcement: Webinars for “Oracle Indexing Internals and Best Practices” Now Confirmed !!

Richard Foote - Mon, 2018-07-23 03:28
Exciting News !! I can now confirm the dates for my first webinars of my fully updated and highly acclaimed “Oracle Indexing Internals and Best Practice” seminar. For details of all the extensive content covered in the webinars, please visit my Indexing Seminar page. The webinars will run for 4 hours each day, spanning a full week period […]
Categories: DBA Blogs
