Feed aggregator

Date Format Handling in Oracle JET

Andrejus Baranovski - Tue, 2018-12-11 14:58
Oracle JET comes with out-of-the-box support for date converters; see the Date Converter entry in the cookbook for more detail. This makes it very handy to format dates in JavaScript. Here is a date picker field example with the yyyy-MM-dd format applied.


When the Process button is pressed, I take the date value from the date picker and add one day; the result is printed in the log. This is just to test a simple date operation in JavaScript.

The date picker is defined by a JET tag, and the format is assigned through the converter property.


The current date is displayed from an observable variable, which is initialized with the current date converted to a local ISO string. The converter is configured with a pattern. In the JS method where tomorrow's date is calculated, make sure to convert back from the ISO local date:
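For reference, here is a minimal sketch of such a view model (assuming a Knockout-based JET module; the variable and function names are illustrative and not taken from the original source):

define(['ojs/ojcore', 'knockout', 'ojs/ojvalidation-datetime', 'ojs/ojdatetimepicker'],
  function (oj, ko) {
    function ViewModel() {
      var self = this;

      // converter for the yyyy-MM-dd pattern, bound to the date picker's converter property
      self.dateConverter = oj.Validation
        .converterFactory(oj.ConverterFactory.CONVERTER_TYPE_DATETIME)
        .createConverter({ pattern: 'yyyy-MM-dd' });

      // observable initialized with the current date converted to a local ISO string
      self.currentDate = ko.observable(oj.IntlConverterUtils.dateToLocalIso(new Date()));

      // Process button action: convert back from ISO, add one day, log the result
      self.processDate = function () {
        var date = oj.IntlConverterUtils.isoToLocalDate(self.currentDate());
        date.setDate(date.getDate() + 1);
        console.log('Tomorrow: ' + oj.IntlConverterUtils.dateToLocalIso(date));
      };
    }
    return new ViewModel();
  });

The oj-input-date element would then bind value="{{currentDate}}" and converter="[[dateConverter]]".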


Hope this simple example helps you to work with dates in your Oracle JET application. Source code is available in my GitHub repository.

Announcing Oracle Functions

OTN TechBlog - Tue, 2018-12-11 12:57

Photo by Tim Easley on Unsplash

[First posted on the Oracle Cloud Infrastructure Blog]

At KubeCon 2018 in Seattle Oracle announced Oracle Functions, a new cloud service that enables enterprises to build and run serverless applications in the cloud. 

Oracle Functions is a serverless platform that makes it easy for developers to write and deploy code without having to worry about provisioning or managing compute and network infrastructure. Oracle Functions manages all the underlying infrastructure automatically and scales it elastically to service incoming requests.  Developers can focus on writing code that delivers business value.

Pay-per-use

Serverless functions change the economic model of cloud computing as customers are only charged for the resources used while a function is running.  There’s no charge for idle time! This is unlike the traditional approach of deploying code to a user provisioned and managed virtual machine or container that is typically running 24x7 and which must be paid for even when it’s idle.  Pay-per-use makes Oracle Functions an ideal platform for intermittent workloads or workloads with spiky usage patterns. 

Open Source

Open source has changed the way businesses build software and the same is true for Oracle. Rather than building yet another proprietary cloud functions platform, Oracle chose to invest in the Apache 2.0 licensed open source Fn Project and build Oracle Functions on Fn. With this approach, code written for Oracle Functions will run on any Fn server.  Functions can be deployed to Oracle Functions or to a customer managed Fn cluster on-prem or even on another cloud platform.  That said, the advantage of Oracle Functions is that it’s a serverless offering which eliminates the need for customers to manually manage an Fn cluster or the underlying compute infrastructure. But thanks to open source Fn, customers will always have the choice to deploy their functions to whatever platform offers the best price and performance. We’re confident that platform will be Oracle Functions.

Container Native

Unlike most other functions platforms, Oracle Functions is container native with functions packaged as Docker container images.  This approach supports a highly productive developer experience for new users while allowing power users to fully customize their function runtime environment, including installing any required native libraries.  The broad Docker ecosystem and the flexibility it offers lets developers focus on solving business problems and not on figuring out how to hack around restrictions frequently encountered on proprietary cloud function platforms. 

As functions are deployed as Docker containers, Oracle Functions is seamlessly integrated with the Docker Registry v2 compliant Oracle Cloud Infrastructure Registry (OCIR) which is used to store function container images.  Like Oracle Functions, OCIR is also both serverless and pay-per-use.  You simply build a function and push the container images to OCIR which charges just for the resources used.

Secure

Security is the top priority for Oracle Cloud services and Oracle Functions is no different. All access to functions deployed on Oracle Functions is controlled through Oracle Identity and Access Management (IAM), which allows both function management and function invocation privileges to be assigned to specific users and user groups.  And once deployed, functions themselves may only access resources on VCNs in their compartment that they have been explicitly granted access to.  Secure access is also the default for function container images stored in OCIR.  Oracle Functions works with OCIR private registries to ensure that only authorized users are able to access and deploy function containers.  In each of these cases, Oracle Functions takes a “secure by default” approach while providing customers full control over their function assets.

Getting Started

Oracle Functions will be generally available in 2019 but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Oracle Functions or to request access, please let us know by registering with this form.  You can also learn more about the underlying open source technology used in Oracle Functions at FnProject.io.

Announcing Oracle Cloud Native Framework at KubeCon North America 2018

OTN TechBlog - Tue, 2018-12-11 12:00

This blog was originally published at https://blogs.oracle.com/cloudnative/

At KubeCon + CloudNativeCon North America 2018, Oracle has announced the Oracle Cloud Native Framework - an inclusive, sustainable, and open cloud native development solution with deployment models for public cloud, on premises, and hybrid cloud. The Oracle Cloud Native Framework is composed of the recently-announced Oracle Linux Cloud Native Environment and a rich set of new Oracle Cloud Infrastructure cloud native services including Oracle Functions, an industry-first, open serverless solution available as a managed cloud service based on the open source Fn Project.

With this announcement, Oracle is the only major cloud provider to deliver and support a unified cloud native solution across managed cloud services and on-premises software, for public cloud (Oracle Cloud Infrastructure), hybrid cloud and on-premises users, supporting seamless, bi-directional portability of cloud native applications built anywhere on the framework.  Since the framework is based on open, CNCF certified, conformant standards, it will not lock you in – applications built on the Oracle Cloud Native Framework are portable to any Kubernetes conformant environment, on any cloud or infrastructure.

Oracle Cloud Native Framework – What is It?

The Oracle Cloud Native Framework provides a supported solution of Oracle Cloud Infrastructure cloud services and Oracle Linux on-premises software based on open, community-driven CNCF projects. These are built on an open, Kubernetes foundation – among the first K8s products released and certified last year. Six new Oracle Cloud Infrastructure cloud native services are being announced as part of this solution and build on the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services.

Cloud Native at a Crossroads – Amazing Progress

We should all pause and consider how far the cloud native ecosystem has come – evidenced by the scale, excitement, and buzz around the sold-out KubeCon conference this week and the success and strong foundation that Kubernetes has delivered! We are living in a golden age for developers – a literal "First Wave" of cloud native deployment and technology - being shaped by three forces coming together and creating massive potential:

  • Culture: The DevOps culture has fundamentally changed the way we develop and deploy software and how we work together in application development teams. With almost a decade’s worth of work and metrics to support the methodologies and cultural shifts, it has resulted in many related off-shoots, alternatives, and derivatives including SRE, DevSecOps, AIOps, GitOps, and NoOps (the list will go on no doubt).

  • Code: Open source and the projects that have been battle tested and spun out of webscale organizations like Netflix, Google, Uber, Facebook, and Twitter have been democratized under the umbrella of organizations like CNCF (Cloud Native Computing Foundation). This grants the same access and opportunities to citizen developers playing or learning at home, as it does to enterprise developers in the largest of orgs.

  • Cloud: Unprecedented compute, network, and storage are available in today’s cloud – and that power continues to grow with a never-ending explosion in scale, from bare metal to GPUs and beyond. This unlocks new applications for developers in areas such as HPC apps, Big Data, AI, blockchain, and more. 

Cloud Native at a Crossroads – Critical Challenges Ahead

Despite all the progress, we are facing new challenges to reach beyond these first wave successes. Many developers and teams are being left behind as the culture changes. Open source offers thousands of new choices and options, which on the surface create more complexity than a closed, proprietary path where everything is pre-decided for the developer. The rush towards a single source cloud model has left many with cloud lock-in issues, resulting in diminished choices and rising costs – the opposite of what open source and cloud are supposed to provide.

The challenges below mirror the positive forces above and are reflected in the August 2018 CNCF survey:

  • Cultural Change for Developers: on premises, traditional development teams are being left behind. Cultural change is slow and hard.

  • Complexity: too many choices, too hard to do yourself (maintain, administer), too much too soon?

  • Cloud Lock-in: proprietary single-source clouds can lock you in with closed APIs, services, and non-portable solutions.

The Cloud Native Second Wave – Inclusive, Sustainable, Open

What’s needed is a different approach:

  • Inclusive: can include cloud and on-prem, modern and traditional, dev and ops, startups and enterprises

  • Sustainable: managed services versus DIY, open but curated, supported, enterprise grade infrastructure

  • Open: truly open, community-driven, and not based on proprietary tech or self-serving OSS extensions

Introducing the Oracle Cloud Native Framework – What’s New?

The Oracle Cloud Native Framework spans public cloud, on-premises, and hybrid cloud deployment models – offering choice and uniquely meeting the broad deployment needs of developers. It includes Oracle Cloud Infrastructure Cloud Native Services and the Oracle Linux Cloud Native Environment. On top of the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services, a rich set of new Oracle Cloud Infrastructure cloud native services has been announced with services across provisioning, application definition and development, and observability and analysis.

 

  • Application Definition and Development

    • Oracle Functions: A fully managed, highly scalable, on-demand, functions-as-a-service (FaaS) platform, built on enterprise-grade Oracle Cloud Infrastructure and powered by the open source Fn Project. Multi-tenant and container native, Oracle Functions lets developers focus on writing code to meet business needs without having to manage or even address the underlying infrastructure. Users only pay for execution, not for idle time.

    • Streaming: Enables applications such as supply chain, security, and IoT to collect data from many sources and process it in real time. Streaming is a highly available, scalable and multi-tenant platform that makes it easy to collect and manage streaming data.

  • Provisioning

    • Resource Manager: A managed Oracle Cloud Infrastructure provisioning service based on industry-standard Terraform. Infrastructure-as-code is a fundamental DevOps pattern, and Resource Manager is an indispensable tool that automates configuration and increases productivity by managing infrastructure declaratively.

  • Observation and Analysis

    • Monitoring: An integrated service that reports metrics from all resources and services in Oracle Cloud Infrastructure. Monitoring provides predefined metrics and dashboards, and also supports a service API to obtain a top-down view of the health, performance, and capacity of the system. The monitoring service includes alarms to track these metrics and act when they vary or exceed defined thresholds, helping users meet service level objectives and avoid interruptions.

    • Notification Service: A scalable service that broadcasts messages to distributed components, such as email and PagerDuty. Users can easily deliver messages about Oracle Cloud Infrastructure to large numbers of subscribers through a publish-subscribe pattern.

    • Events: Based on the CNCF Cloud Events standard, Events enables users to react to changes in the state of Oracle Cloud Infrastructure resources, whether initiated by the system or by user action. Events can store information to Object Storage, or they can trigger Functions to take actions, Notifications to inform users, or Streaming to update external services.

Use Cases for the Oracle Cloud Native Framework: Inclusive, Sustainable, Open

Inclusive: The Oracle Cloud Native Framework includes both cloud and on-prem, supports modern and traditional applications, supports both dev and ops, and can be used by startups and enterprises. As an industry, we need to create more on-ramps to the cloud native freeway – in particular by reaching out to teams and technologies and connecting cloud native to what people know and work on every day. The WebLogic Server Operator for Kubernetes is a great example of just that. It enables existing WebLogic applications to easily integrate into and leverage Kubernetes cluster management.

As another example, the Helidon project for Java creates a microservice architecture and framework for Java apps to move more quickly to cloud native.

Many Oracle Database customers are connecting cloud native applications based on Kubernetes for new web front-ends and AI/big data processing back-ends, and the combination of the Oracle Autonomous Database and OKE creates a new model for self-driving, securing, and repairing cloud native applications. For example, using Kubernetes service broker and service catalog technology, developers can simply connect Autonomous Transaction Processing applications into OKE services on Oracle Cloud Infrastructure.

 

Sustainable: The Oracle Cloud Native Framework provides a set of managed cloud services and supported on-premises solutions, open and curated, and built on an enterprise grade infrastructure. New open source projects are popping up every day and the rate of change of existing projects like Kubernetes is extraordinary. While the landscape grows, the industry and vendors must face the resultant challenge of complexity as enterprises and teams can only learn, change, and adopt so fast.

A unified framework helps reduce this complexity through curation and support. Managed cloud services are the secret weapon to reduce the administration, training, and learning curve issues enterprises have had to shoulder themselves. While a do-it-yourself approach has been their only choice up to recently, managed cloud services such as OKE give developers a chance to leapfrog into cloud native without a long and arduous learning curve.

A sustainable model – built on an open, enterprise grade infrastructure, gives enterprises a secure, performant platform from which to build real hybrid cloud deployments including these five key hybrid cloud use cases:

  1. Development and DevOps: Dev/test in the cloud, production on-prem

 

 

  2. Application Portability and Migration: enables bi-directional cloud native application portability (on-prem to cloud, cloud to on-prem) and lift and shift migrations.  The Oracle MySQL Operator for Kubernetes is an extremely popular solution that simplifies portability and integration of MySQL applications into cloud native tooling.  It enables creation and management of production-ready MySQL clusters based on a simple declarative configuration format including operational tasks such as database backups and restoring from an existing backup. The MySQL Operator simplifies running MySQL inside Kubernetes and enabling further application portability and migrations.

 

 

  3. HA/DR: Disaster recovery or high availability sites in cloud, production on-prem

  4. Workload-Specific Distribution: Choose where you want to run workloads, on-prem or cloud, based on specific workload type (e.g., based on latency, regulation, new vs. legacy)

  5. Intelligent Orchestration: More advanced hybrid use cases require more sophisticated distributed application intelligence and federation – these include cloud bursting and Kubernetes federation

 

Open: Over the course of the last few years, development teams have typically chosen to embrace a single-source cloud model to move fast and reduce complexity – in other words the quick and easy solution. The price they are paying now is cloud lock-in resulting from proprietary services, closed APIs, and non-portable solutions. This is the exact opposite of where we are headed as an industry – fueled by open source, CNCF-based, and community-driven technologies.

 

An open ecosystem enables not only a hybrid cloud world but a truly multi-cloud world – and that is the vision that drives the Oracle Cloud Native Framework!

Oracle Arms Developers with the Most Comprehensive Cloud Native Framework

Oracle Press Releases - Tue, 2018-12-11 10:00
Press Release
Oracle Arms Developers with the Most Comprehensive Cloud Native Framework
Oracle Cloud Native Framework provides deployment models for Oracle Cloud Infrastructure, hybrid cloud and on-premises users

KubeCon, Seattle, Wash.—Dec 11, 2018

Oracle today announced the Oracle Cloud Native Framework, providing developers a cloud native solution that spans public cloud, on premises and hybrid cloud deployments. Capitalizing on Oracle Cloud Infrastructure and the recently announced Oracle Linux Cloud Native Environment, the Oracle Cloud Native Framework introduces a rich set of cloud native managed services and on-premises software. The Oracle Cloud Native Framework also introduces Oracle Functions, a new breakthrough serverless cloud service based on the open source Fn Project.

As organizations move to the cloud, they are facing new and difficult challenges addressing cultural change and increased complexity. DevOps and cloud native tooling have left many developers and projects behind the curve. Moreover, organizations are eager to use standard open source components and leverage cloud capabilities, but are impeded by the number of complex choices, a lack of training and fear of cloud vendor lock-in. By providing cloud native capabilities and offerings regardless of the deployment scenario and leveraging open standards established by the Cloud Native Computing Foundation (CNCF), Oracle, a platinum member of CNCF, is uniquely providing its customers with choice while meeting the broad deployment needs of developers.

“With the growing popularity of the CNCF as a unifying and organizing force in the cloud native ecosystem and organizations increasingly embracing multi cloud and hybrid cloud models, developers should have the flexibility to build and deploy their applications anywhere they choose without the threat of cloud vendor lock-in. Oracle is making this a reality,” said Don Johnson, executive vice president, product development, Oracle Cloud Infrastructure.

To further enable developers to build and deploy modern applications, Oracle is introducing a rich set of first class Oracle Cloud Infrastructure services built on Oracle’s Gen 2 Cloud IaaS and existing foundational Kubernetes orchestration and management layer, the Oracle Container Engine for Kubernetes (OKE). These new offerings are focused on three critical Oracle Cloud Infrastructure cloud native layers:

  • Application Definition & Development

    • Oracle Functions: Scalable, multi-tenant serverless functions that let users focus on writing code to meet business needs without having to know about any infrastructure concepts. Users only pay when a function is invoked, but pay nothing when the code is not running.
    • Streaming: A highly available, scalable and multi-tenant streaming platform that makes it easy to collect and manage streaming data. Streaming enables applications such as IoT, security and supply chain, where large amounts of data are collected from many sources and need to be processed in real time.
  • Provisioning

    • Resource Manager: A managed service that can provision all Oracle Cloud Infrastructure resources and services. Resource Manager reduces configuration errors and increases productivity by managing infrastructure declaratively (i.e. “infrastructure as code”) using industry standard Terraform.
  • Observability & Analysis

    • Monitoring: An integrated service that reports metrics from all resources and services in Oracle Cloud Infrastructure. Use predefined metrics and dashboards, or use the service API to obtain a holistic view of the health, performance, and capacity of the system. The monitoring service includes alarms to track these metrics and take action when they vary or exceed defined thresholds. Alarms provide a critical service to help users meet service level objectives and avoid interruptions.
    • Notification Service: A scalable service that broadcasts messages to distributed components, such as email and PagerDuty. With the notification service, users can easily deliver messages about Oracle Cloud Infrastructure to large numbers of subscribers through a publish-subscribe pattern.
    • Events: An offering that enables users to react to changes in the state of Oracle Cloud Infrastructure resources, whether initiated by the system or by user action. Events can store information to Object Storage, or they can trigger Functions to take actions, Notifications to inform users, or Streaming to update external services.

With the announcement of Oracle Functions, a new breakthrough serverless solution based on the open source Fn Project, developers can easily deploy and execute function-based applications without the need to manage compute infrastructure. Oracle Functions is Docker container-based and completely pay-per-use, so charges are incurred only when functions are run. The underlying Fn Project, which Oracle Functions offers as a fully-managed service, can run on-premises, in a data center, or on any cloud. The Oracle Cloud Native Framework supports both modern (cloud native) and traditional (WebLogic, Java, and database) applications. The enterprise grade platform can be leveraged for sustainable strategy, managed services, and curated OSS.

  • "Kubernetes and containers are enabling CERN to improve the deployment of critical Java Applications to reduce maintenance tasks and focus more on developers' needs, including improving their experience with Oracle WebLogic Server. In this context, Oracle has provided us with an open source tool that has helped us decrease considerably the time needed to configure WebLogic. We are also working to take advantage of Oracle Cloud Infrastructure and its Kubernetes offering to replace our disaster recovery solution for our databases and WebLogic," said Antonio Nappi, DevOps Engineer for CERN.
     
    “Our business relies on delivering innovative software solutions to a global market. Oracle Container Engine for Kubernetes on Oracle Cloud Infrastructure has helped us not only converge data centers and add scale, but also significantly improve performance and security as we adopt a microservices architecture. It’s actually been the simplest part of our migration,” said Jason Looney, vice president of Enterprise Architecture, Beeline.
     
    “Sauce is a video collaboration and creation platform for teams, empowering brands to create authentic video content. Oracle Container Engine for Kubernetes on Oracle Cloud Infrastructure has helped us scale our cloud technology, supporting our global growth while increasing reliability and performance, especially in the areas of video transcoding and creation. Getting started on Oracle Container Engine for Kubernetes was a breeze and now our team can develop and deploy faster than ever,” said Jon Girven, co-founder and CTO of Sauce Video. 
     
    “Gapsquare is a cloud-based SaaS solution that helps large companies maintain continuous fair pay by closing the gender pay gap, ethnicity pay gap, and any equal pay issues. By moving to the Oracle Cloud completely, we have been able to reduce time spent on DevOps and focus on implementing new features. We found Oracle Container Engine for Kubernetes to be the easiest way to reliably deploy our application on a scalable and distributed environment, while continuing to deliver new automatic features that require no maintenance. This has allowed us to speed time to market for new offerings and rapidly expand globally,” said Zara Nanu, CEO of Gapsquare, and a member of Oracle's global startup program.
Contact Info
Danielle Tarp
Oracle
+1.650.506.2905
danielle.tarp@oracle.com
Quentin Nolibois
Burson Cohn & Wolfe
+1 415.591.4097
quentin.nolibois@bm.com
About Oracle Cloud Infrastructure

Oracle Cloud Infrastructure is an enterprise Infrastructure as a Service (IaaS) platform. Companies of all sizes rely on Oracle Cloud to run enterprise and cloud native applications with mission-critical performance and core-to-edge security. By running both traditional and new workloads on a comprehensive cloud that includes compute, storage, networking, database, and containers, Oracle Cloud Infrastructure can dramatically increase operational efficiency and lower total cost of ownership. For more information, visit https://cloud.oracle.com/iaas

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


[BLOG] Oracle Cloud Infrastructure Architect Associate (1Z0-932) | DbaasCli Utility

Online Apps DBA - Tue, 2018-12-11 06:02

Are you preparing for 1Z0-932 Certification and want to clear your concepts on some important topics that are covered in the 1z0-932 exam? If yes, then visit: https://k21academy.com/oci24 and learn about the capabilities of DBaaSCli Utility in detail with the help of Q&A & much more… Are you preparing for 1Z0-932 Certification and want to […]

The post [BLOG] Oracle Cloud Infrastructure Architect Associate (1Z0-932) | DbaasCli Utility appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Accelerate your #BI Performance with #Exasol

The Oracle Instructor - Tue, 2018-12-11 02:15

Do your BI users complain about slow performance of their analytical queries? Is this your status quo?

Tableau was taken as a popular example for ad-hoc analytics, but it might be any of the others like MicroStrategy, Looker, you name it. The good news is that this problem can be solved quite easily and without having to spend a fortune trying to speed up your legacy DWH to keep up with the BI demands:

Exasol High Performance Sidecar

Using Exasol as a High Performance Sidecar to take away the pain from your BI users is the easy and fast cure for your problem! This is actually the most common way Exasol arrives at companies. More often than not, this leads to a complete replacement of the legacy DWH by Exasol:

Exasol replaces legacy DWH

That’s what adidas, Otto and Zalando did, to name a few of our customers.

Don’t take our word for it – we are more than happy to do a PoC!

Categories: DBA Blogs

Transparent Data Encryption – Certificate in master database: is it really a good practice?

Yann Neuhaus - Tue, 2018-12-11 02:00

As encryption solution in SQL Server, Transparent Data Encryption (TDE) is simple and quick to set up. That’s why this is a common encryption mechanism.

TDE encrypts data at the page level, before SQL Server writes to disk, using a database encryption key protected by a certificate. It is supposed to protect your environment from scenarios where SQL Server files (backups or data files) are stolen.
By default the certificate used for encryption is stored in the master database. But is it really a good practice?

Let’s see with a common scenario:
– One MSSQL instance where TDE is enabled for one database
– One MSSQL instance without any certificate

Configuring TDE for a database

Create a master key:

USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'P@$$w0rd'
GO

Create a certificate to use for TDE:

USE master
GO
CREATE CERTIFICATE CertinMaster
   WITH SUBJECT = 'Self-Signed Certificate in Master',   
   EXPIRY_DATE = '20241231';  
GO  


Create a database encryption key in the database you want to encrypt, protected by the certificate:

USE dummy
GO
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ENCRYPTION BY SERVER CERTIFICATE CertinMaster
GO

Enable encryption on the database:

USE master
GO
ALTER DATABASE dummy SET ENCRYPTION ON
GO


At this step:
– TDE is configured for dummy database
– Certificate used for encryption is located in the master database
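To double-check this state, a couple of queries against the standard catalog views can be run on the source instance (a quick sketch; encryption_state = 3 means the database is encrypted):

-- Which databases are encrypted, and with which key
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       encryptor_thumbprint
FROM sys.dm_database_encryption_keys;

-- The certificate protecting the database encryption key lives in master
SELECT name, subject, expiry_date
FROM master.sys.certificates
WHERE name = 'CertinMaster';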

Restoring the encrypted database to another instance – first try

After backing up the dummy database and copying the backup to another MSSQL instance, an error is displayed when trying to restore this database, because the required certificate is missing.


This example shows how TDE protects data in a scenario where someone has stolen your backup file.
A similar error appears if an attach-file procedure is used instead.

Restoring the encrypted database to another instance – second try

After backing up the master database, let’s try to restore it on the other MSSQL environment.
Restoring a master database is a bit more complicated than a user database, but not impossible.

First it is mandatory to start the instance in single-user mode, by adding the ‘-m’ startup parameter.
PowerShell code executed on target server:

$Path = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.ARCTURUS\MSSQLServer\Parameters\";
New-ItemProperty -Path $Path -PropertyType String -Name SQLArg3 -Value "-m"

Note: depending on how many startup parameters you already have (the default is 3), adjust the SQLArgX property name accordingly.

A restart of the instance is necessary to apply the startup parameter. Do not start the Agent service, otherwise it will take the only available connection to the instance in single-user mode.

As only one connection to the instance is authorized in this mode, PowerShell is more suited to do the job for the restore:

$Query= "RESTORE DATABASE master FROM DISK = 'C:\Mountpoints\Backup\master.bak' WITH REPLACE";
Invoke-Sqlcmd -ServerInstance KERRIGAN\ARCTURUS -Query $Query

The SQL Server instance is automatically stopped once the master database has been restored.

Before restarting, first remove the single-user mode parameter:

$Path = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.ARCTURUS\MSSQLServer\Parameters\";
Remove-ItemProperty -Path $Path -Name SQLArg3

After disabling Single User mode, SQL Server services (Engine + Agent) can start again.

Querying the restored master database confirms that it is effectively holding the certificate.

Finally, restoring the dummy database on this instance works without any trouble: the data is freely accessible.

Conclusion

For complete protection, I do not recommend using this simple form of Transparent Data Encryption. As a best practice, I recommend storing the encryption key on a third-party system using Extensible Key Management (EKM).
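The exact statements depend on the EKM vendor, but the overall shape in T-SQL looks roughly like this (a sketch only; the provider DLL path, credential and key names are placeholders):

-- Allow EKM providers on the instance
-- EXEC sp_configure 'EKM provider enabled', 1; RECONFIGURE;

-- Register the cryptographic provider supplied by the key-management vendor
CREATE CRYPTOGRAPHIC PROVIDER MyEkmProvider
FROM FILE = 'C:\Program Files\MyVendor\EkmProvider.dll';

-- Credential used to authenticate against the external key store
CREATE CREDENTIAL EkmCredential
    WITH IDENTITY = 'EkmUser', SECRET = 'EkmPassword'
    FOR CRYPTOGRAPHIC PROVIDER MyEkmProvider;

-- Reference an asymmetric key that lives on the external appliance
CREATE ASYMMETRIC KEY EkmTdeKey
    FROM PROVIDER MyEkmProvider
    WITH PROVIDER_KEY_NAME = 'TdeKey',
         CREATION_DISPOSITION = OPEN_EXISTING;

-- The database encryption key is then protected by this key instead of a certificate:
-- CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256
--     ENCRYPTION BY SERVER ASYMMETRIC KEY EkmTdeKey;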

 

The post Transparent Data Encryption – Certificate in master database: is it really a good practice? appeared first on Blog dbi services.

Overview of Automatic Database Diagnostic Monitor (ADDM)

Online Apps DBA - Tue, 2018-12-11 01:12

ADDM is a must-know tool when doing performance tuning! Do you know about the tool that analyzes data in the Automatic Workload Repository (AWR), performing accurate and timely diagnosis of problems? Automatic Database Diagnostic Monitor (ADDM) locates the root cause and provides recommendations for correcting the problem. Read the blog at https://k21academy.com/tuning15 to know […]

The post Overview of Automatic Database Diagnostic Monitor (ADDM) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

PHP OCI8 2.2.0 has database call timeouts

Christopher Jones - Mon, 2018-12-10 22:55

Hot on the heels of PHP 7.3 being released, I've just pushed PHP OCI8 2.2.0 to PECL.

This release will install on PHP 7.0 through 7.3.

One new feature available when OCI8 is compiled with Oracle Client 18c libraries is a database call timeout. A new oci_set_call_timeout() function is available. This will interrupt long running calls to the database, allowing better control over application behavior.
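Here is a minimal sketch of how the new function might be used (the connect string and credentials are placeholders, and the query is deliberately slow):

<?php
// Assumes OCI8 2.2 compiled with Oracle Client 18c libraries.
$conn = oci_connect('hr', 'welcome', 'localhost/orclpdb1');

// Interrupt any single database round-trip that takes longer than 5 seconds.
// The timeout is specified in milliseconds.
oci_set_call_timeout($conn, 5000);

$stid = oci_parse($conn, 'select count(*) from all_objects, all_objects');
if (!@oci_execute($stid)) {
    $e = oci_error($stid);
    echo "Call interrupted or failed: " . $e['message'] . "\n";
}
?>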

Install with 'pecl install oci8'. Note that currently the Windows builds are not using 18c, so do not have oci_set_call_timeout(). (Yet another reason to move to Linux!)

The OCI8 2.2 code line will also be included in the future PHP 7.2.14 and PHP 7.3.1 full software releases.

Looking for documentation? As well as php.net/oci8, there is still a lot of useful information in The Underground PHP and Oracle Manual.

Connecting to Oracle Autonomous Transaction Processing (ATP) from Developer Cloud Service

Shay Shmeltzer - Mon, 2018-12-10 18:10

The latest and greatest flavor of the Oracle Database in the cloud is Oracle Autonomous Transaction Processing (ATP). It is one of the Autonomous DB flavors, and it is optimized for OLTP (On-Line Transaction Processing) applications - the type that you and I usually work on.

One new feature in the world of ATP is the way that you connect to the DB: the connection leverages wallets to make sure that your data is secured even though you are connecting over the public internet. Here are instructions on how to get such a wallet file for your instance of ATP.

We introduced an enhancement to the latest version of Developer Cloud Service that allows you to connect to ATP from your CI/CD automation jobs. This can help you automate CI/CD for SQL scripts that you need to run against that DB.

As I mentioned in past blogs, DevCS has built in support for the SQLcl utility, allowing you to run SQL scripts against an Oracle database as part of your CI/CD chain. If you want to connect the SQLcl utility in DevCS to ATP, it will need to have access to your wallet.zip file. You can achieve this by uploading the file into your git repository.

Then in your SQLcl configuration you'll specify the user/pass like before, and then point the field titled Credentials File to the wallet.zip file location. (in the screenshot below, the zip file is at the top of the git repo connected to the build - so there is no need to add a path). In the next field, titled connection string, you specify the name used in the wallet's tnsnames.ora file to connect to the DB.

Now you can continue as usual and provide inline SQL or point to SQL files from your git repository.

ATP Connection Definition

 

Categories: Development

Foglight

Yann Neuhaus - Mon, 2018-12-10 11:37

As many Oracle Database Administrators, I am a regular user of Oracle Enterprise Manager in order to monitor or administer my client’s databases. I am lucky because most of them have the diagnostic pack and the tuning pack :=).

But we always have to be curious and discover new products. As described by my colleague Nicolas Penot, I installed Foglight:

https://blog.dbi-services.com/foglight-monitoring-solution-for-databases-part-01/

https://blog.dbi-services.com/foglight-monitoring-solution-for-databases-part-02/

The installation was successful and very quick to complete. Remember that when you have to install OEM 13c, even if you know the product well and have a powerful server, you are pretty efficient if you can install it in half a day.

I successfully discovered two Oracle databases (versions 12.2.0.1 and 18.3.0.0). Besides Oracle, Foglight can also monitor MySQL, Postgres, DB2, and SQL Server.


 

The first database summary page shows a lot of useful information (process activity, memory activity, logical reads, and so on).


 


We can navigate through different menus, for example the Storage Tablespace Summary.


We can also display the datafile I/O summary.


 

We can display the pluggable databases and get an overview of workload metrics with nice graphs.


 


 

Foglight also displays the alert.log file error messages with four categories (Informational, Critical, Warning or Fatal).

To create false critical messages in the alert.log file, I used dbms_system.ksdwrt() to generate fake ORA-00600 or ORA-04031 error messages:
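For reference, this is the kind of anonymous block used (a sketch; the ORA-00600 argument text is entirely made up for the test, and EXECUTE on SYS.DBMS_SYSTEM is required):

-- dbms_system.ksdwrt(dest, msg): dest = 1 writes to the trace file,
-- 2 writes to the alert.log, 3 writes to both.
begin
  sys.dbms_system.ksdwrt(2, 'ORA-00600: internal error code, arguments: [monitoring_test], [], [], []');
  sys.dbms_system.ksdwrt(2, 'ORA-04031: unable to allocate 4096 bytes of shared memory (monitoring test)');
end;
/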


 

You can visualize, enable or disable the alarms, and edit or modify the collections; the configuration is intuitive.


The report menu is also interesting; many templates for different databases are defined.


It is very difficult to make a choice between Foglight and OEM. My first opinion (though I did not test Foglight in an active production environment) is that Foglight is a monitoring solution for many types of databases (MySQL, Oracle, Postgres, SQL Server, DB2), whereas OEM is a monitoring and administration tool for Oracle databases.

Admittedly, you can add the SQL Server or MySQL plugin, or the Medora plugin for Postgres, to OEM in order to administer SQL Server, MySQL or Postgres databases, but you will never reach the powerful functionality OEM offers to monitor and administer Oracle databases.

The big advantage both OEM and Foglight share is their threshold and notification functionality and their reporting facilities. I did not test SQL PI in Foglight much, and I am asking myself whether, as with Oracle, we have the possibility to see a historical graph of what happened the week before, to display the execution plan and, of course, to run the SQL Tuning Advisor to offer another execution plan, for example.

The time dedicated to administering Foglight seems to be equivalent to OEM. You cannot install these enterprise products and just let them live; like OEM, Foglight seems to be a complex system and requires significant time to run effectively.

If you do not have a lot of databases or a big budget, you can use simple SQL scripts to keep you alerted. If you have several different types of databases to monitor, it might be a good idea to test and use Foglight. If most of your databases are Oracle based and you have paid for the Tuning and Diagnostic Packs, use OEM to fully benefit from the administration and monitoring console.

 

The post Foglight appeared first on Blog dbi services.

Case Study

Jonathan Lewis - Mon, 2018-12-10 07:10

A recent thread on the ODC database forum highlighted a case where the optimizer was estimating 83,000 for a particular index full scan when the SQL Monitor output for the operation showed that it was returning 11,000,000 rows.

Apart from the minor detail that the OP didn’t specifically ask a question, the information supplied was pretty good. The OP had given us a list of bind variables, with values, and the SQL statement, followed by the text output of the Monitor’ed SQL and, to get the predicate section of the plan, the output from a call to dbms_xplan. This was followed by the DDL for the critical index and a list of the stats for all the columns in the index.

Here’s the critical line of the plan (from the SQL Monitor report) followed by its predicate section (from the dbms_xplan output, but cosmetically enhanced) and some details of the columns used in the predicate:

SQL Plan Monitoring Details (Plan Hash Value=3210215320)
=================================================================================================================================================================================================================================
| Id    |            Operation            |         Name            |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  | Write | Write | Mem  | Temp | Activity |       Activity Detail       | Progress | 
|       |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes | Reqs  | Bytes |      |      |   (%)    |         (# samples)         |          |
=================================================================================================================================================================================================================================
|    11 |             INDEX FULL SCAN     | PK_HOUSEHOLD_GDC        |   83917 | 22799 |        86 |     +1 |     1 |      11M |     9 | 73728 |       |       |      |      |    24.21 | Cpu (77)                    |          |
=================================================================================================================================================================================================================================

  11 - filter(
        (    TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR"),:SYS_B_02)>=ADD_MONTHS(TRUNC(TO_DATE(:SYS_B_03,:SYS_B_04),:SYS_B_05),(-:SYS_B_06)) 
         AND TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR"),:SYS_B_02)<=TRUNC(TO_DATE(:SYS_B_07,:SYS_B_08),:SYS_B_09)-:SYS_B_10)
        )

COLUMN_NAME                    DATA_TYPE       NUM_DISTINCT  DENSITY  NUM_NULLS LAST_ANALYZED       HISTOGRAM
------------------------------ --------------- ------------ -------- ---------- ------------------- ---------------
YEAR                           NUMBER                     5        0          0 2018-12-02 13:19:10 FREQUENCY
MONTH                          NUMBER                    12        0          0 2018-12-02 13:19:10 FREQUENCY

I’ve included the full Monitor output at the end of the posting, or you could visit the ODC page if you want to see it, but if we look at just this line we can see that the index full scan starts running in the first second of the query (‘Start Active’), runs once (‘Execs’) and, as the OP said, retrieved 11M rows in that one scan compared to an estimated 83,917.

When we examine the predicate section we can understand why the optimizer could make such a large error – the SQL requires Oracle to combine two columns from the table with various bits of bind variables to construct a date which is then compared with a couple of constant dates derived from several input bind variables using range-based comparisons.

This is an example of Oracle using a fixed estimate of 5% for the selectivity of “unknown range-based comparison” – but with two comparisons the selectivity becomes 5% of 5% = 0.25% (i.e. 1/400).

If we look at the column definitions and stats we see that we seem to have 5 possible years and 12 possible months (which could mean a range as small as 3 years and 2 months) – so a selectivity of 1/400 would be in the right ballpark if we were querying for a date range of roughly 4.5 days. Working the figures the other way around – if 83,917 is 1/400 of the data then there are about 33.5M rows in the table and we are querying for something more like 1/3 of the table.

Observations

I find it curious that the optimizer used an “index full scan” to fetch a huge amount of data from the index when there is no requirement for sorting (there is a subsequent “hash unique”, rather than “sort unique nosort”). I would have expected an “index fast full scan” so I am curious to know if some optimizer parameters have been fiddled with to get the optimizer to bypass the fast full scan. Possibly a change in parameter settings would result in a very different plan.

The names of the bind variables are of the form “SYS_B_nn” – which means that the original query has been subject to the effects of forced cursor sharing. Since we are apparently expecting to identify and manipulate millions of rows this looks like the type of query where you don’t want to use cursor sharing. If the session can set “cursor_sharing=exact” before running the query, or inject the hint /*+ cursor_sharing_exact */ into the query then perhaps we’d get a better estimate of rows (and a better plan). If hinting or setting session parameters is possible then setting optimizer_dynamic_sampling to level 3, or possibly 4, might be sufficient.
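For illustration, here is a minimal sketch of what those options look like (shown against the t1 demo table and predicate that appear later in this note, not against the OP's real table):

alter session set cursor_sharing = exact;
alter session set optimizer_dynamic_sampling = 4;

-- or, per statement, keep the literals visible to the optimizer with a hint
select  /*+ cursor_sharing_exact */
        count(*)
from    t1
where   to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') >= add_months(trunc(to_date('10/04/2018','dd/MM/yyyy'),'q'),(-12))
and     to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') <= trunc(to_date('10/04/2018','dd/MM/yyyy'),'q') - 1
;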

The messy expression combining month and year is a crippling handicap to the optimizer – so fixing the query to make the literals visible isn’t actually going to help. This is Oracle 12c, though – so we could add a virtual date column (declared as invisible to avoid the threat of inserts that don’t specify column lists) and gather stats on it. The combination of virtual column and literal values might give the optimizer the information it really needs. Here’s a little script to demonstrate:


rem
rem     Script:         virtual_study.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem     Purpose:
rem
rem     Last tested
rem             12.1.0.2

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        sysdate - (5 * 365) + rownum / 550      d1,
        to_number(
                to_char(
                        (sysdate - (5 * 365) + rownum / 550),
                        'MM'
                )
        )                                       month,
        to_number(
                to_char(
                        (sysdate - (5 * 365) + rownum / 550),
                        'YYYY'
                )
        )                                       year,
        lpad(rownum,10,'0')                     v1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns month size 12 for columns year size 6'
        );
end;
/

I’ve created a table with a million rows with data going back roughly 5 years from current date, which means I need roughly 550 rows per day. I’ve then created histograms on the month and year columns to match the original posting. Now I’ll set up the bind variables and values specified by the OP and run a simple query to show the date information that the bind variables give, and the 1/400 selectivity of the OP’s predicate:


var SYS_B_00 varchar2(32);
var SYS_B_01 varchar2(32);
var SYS_B_02 varchar2(32);
var SYS_B_03 varchar2(32);
var SYS_B_04 varchar2(32);
var SYS_B_05 varchar2(32);
var SYS_B_06 number;
var SYS_B_07 varchar2(32);
var SYS_B_08 varchar2(32);
var SYS_B_09 varchar2(32);
var SYS_B_10 number;

exec :SYS_B_00:='01/';
exec :SYS_B_01:='/';
exec :SYS_B_02:='dd/MM/yyyy';
exec :SYS_B_03:='10/04/2018';
exec :SYS_B_04:='MM/dd/yyyy';
exec :SYS_B_05:='q';
exec :SYS_B_06:=12;
exec :SYS_B_07:='10/04/2018';
exec :SYS_B_08:='MM/dd/yyyy';
exec :SYS_B_09:='q';
exec :SYS_B_10:=1;

select
        to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02)  d1, 
        add_months(trunc(to_date(:sys_b_03,:sys_b_04),:sys_b_05),(-:sys_b_06))  c1,
        to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02)  d2,
        trunc(to_date(:sys_b_07,:sys_b_08),:sys_b_09)-:sys_b_10                 c2
from
        t1
where
        rownum = 1
;

set serveroutput off
alter session set statistics_level = all;

select  count(*)
from    t1
where
        (    to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02) >= add_months(trunc(to_date(:sys_b_03,:sys_b_04),:sys_b_05),(-:sys_b_06)) 
         and to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02) <= trunc(to_date(:sys_b_07,:sys_b_08),:sys_b_09)-:sys_b_10 )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

===========================================

D1        C1        D2        C2
--------- --------- --------- ---------
01-DEC-13 01-OCT-17 01-DEC-13 30-SEP-18


  COUNT(*)
----------
    200750

--------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:07.39 |    4980 |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:07.39 |    4980 |
|*  2 |   FILTER            |      |      1 |        |    200K|00:00:06.42 |    4980 |
|*  3 |    TABLE ACCESS FULL| T1   |      1 |   2500 |    200K|00:00:04.59 |    4980 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(TRUNC(TO_DATE(:SYS_B_07,:SYS_B_08),:SYS_B_09)-:SYS_B_10 .ge. ADD_MON
              THS(TRUNC(TO_DATE(:SYS_B_03,:SYS_B_04),:SYS_B_05),(-:SYS_B_06)))
   3 - filter((TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR")
              ,:SYS_B_02) .ge. ADD_MONTHS(TRUNC(TO_DATE(:SYS_B_03,:SYS_B_04),:SYS_B_05),(-:SYS_B
              _06)) AND TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR"),:SYS
              _B_02) .le. TRUNC(TO_DATE(:SYS_B_07,:SYS_B_08),:SYS_B_09)-:SYS_B_10))


Note: in this and subsequent execution plans I’ve had to use .le. to represent “less than or equal to” and .ge. to represent “greater than or equal to”.

This shows us that the first row in my table has a date component of 1st Dec 2013, while the date range required by the OP was one year’s worth of data between 1st Oct 2017 and 30th Sept 2018. The optimizer’s estimate of 2,500 rows out of 1M is the 1/400 we expect.

Let’s test the effect of running the query using literals (i.e. in the OP’s environment stop the “cursor_sharing = force” effect):


select
        count(*)
from    t1
where
        (    to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') >= add_months(trunc(to_date('10/04/2018','dd/MM/yyyy'),'q'),(-12)) 
         and to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') <= trunc(to_date('10/04/2018','dd/MM/yyyy'),'q')-1 )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'))

========================================================

 COUNT(*)
----------
    200750


--------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |   892 (100)|      1 |00:00:05.17 |    4980 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |            |      1 |00:00:05.17 |    4980 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |   2500 |   892  (30)|    200K|00:00:04.30 |    4980 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .ge. TO_DAT
              E(' 2017-04-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .le. TO_DATE(' 2018-03-31
              00:00:00', 'syyyy-mm-dd hh24:mi:ss')))


We can see that the literals have echoed through the plan to the predicate section, but the optimizer hasn’t changed its estimate. Let’s create the virtual column, gather stats on it, and try again:


alter table t1 add v_date invisible generated always as (
        to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy')
) virtual
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns v_date size 1')

select  /* virtual column */
        count(*)
from    t1
where
        (    to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') >= add_months(trunc(to_date('10/04/2018','dd/MM/yyyy'),'q'),(-12)) 
         and to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') <= trunc(to_date('10/04/2018','dd/MM/yyyy'),'q')-1 )
;

 select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

=======================================================================

 COUNT(*)
----------
    200750

--------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |   950 (100)|      1 |00:00:06.27 |    4980 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |            |      1 |00:00:06.27 |    4980 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |    236K|   950  (34)|    200K|00:00:04.78 |    4980 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .ge. TO_DAT
              E(' 2017-04-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .le. TO_DATE(' 2018-03-31
              00:00:00', 'syyyy-mm-dd hh24:mi:ss')))



The optimizer sees that the expression involving month and year matches the virtual column definition, evaluates the two date expressions to produce simple constants, and gives us a cardinality estimate in the right ballpark.

Conclusion

Cursor sharing and “big” queries don’t mix. If you have queries that have to manipulate large volumes of data then the overhead of optimising each one separately is likely to be insignificant, and the threat of cardinality errors introduced by bind variables being re-used could be significant.

If you have to make use of an existing (bad) table definition, and can’t manage to write predicates that allow the optimizer to use existing column statistics, remember that you might be able to create a virtual (and invisible) column that captures the necessary definition, thereby allowing you to give Oracle some statistics about the necessary predicate.

Footnote

In case you didn’t want to scan through the ODC page, here’s the full SQL Monitor output for the original query:


Global Stats
==============================================================================================
| Elapsed |   Cpu   |    IO    | Cluster  |  Other   | Buffer | Read | Read  | Write | Write |
| Time(s) | Time(s) | Waits(s) | Waits(s) | Waits(s) |  Gets  | Reqs | Bytes | Reqs  | Bytes |
==============================================================================================
|     320 |      76 |      140 |       39 |       66 |     8M | 257K |   2GB |  1528 | 306MB |
==============================================================================================
 
 
SQL Plan Monitoring Details (Plan Hash Value=3210215320)
=================================================================================================================================================================================================================================
| Id    |            Operation            |         Name            |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  | Write | Write | Mem  | Temp | Activity |       Activity Detail       | Progress | 
|       |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes | Reqs  | Bytes |      |      |   (%)    |         (# samples)         |          |
=================================================================================================================================================================================================================================
|  -> 0 | SELECT STATEMENT                |                         |         |       |       180 |   +142 |     1 |        0 |       |       |       |       |      |      |          |                             |          |
|  -> 1 |   SORT UNIQUE                   |                         |    1093 | 52574 |       180 |   +142 |     1 |        0 |       |       |   534 | 107MB |   2M | 113M |     0.94 | Cpu (3)                     |          |
|  -> 2 |    NESTED LOOPS                 |                         |    1093 | 52573 |       180 |   +142 |     1 |       3M |       |       |       |       |      |      |     0.31 | Cpu (1)                     |          |
|  -> 3 |     NESTED LOOPS                |                         |    1118 | 52573 |       180 |   +142 |     1 |       3M |       |       |       |       |      |      |     0.31 | Cpu (1)                     |          |
|  -> 4 |      HASH JOIN RIGHT SEMI       |                         |    1118 | 52238 |       189 |   +133 |     1 |       3M |       |       |       |       | 153M |      |     1.57 | Cpu (5)                     |          |
|     5 |       VIEW                      |                         |    157K | 31145 |         9 |   +134 |     1 |       2M |       |       |       |       |      |      |          |                             |          |
|     6 |        WINDOW SORT              |                         |    157K | 31145 |        57 |    +86 |     1 |       4M |  3777 | 199MB |   994 | 199MB |      |      |     3.14 | Cpu (5)                     |     100% |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | direct path read temp (5)   |          |
|     7 |         HASH JOIN               |                         |    157K | 29653 |        50 |    +85 |     1 |       4M |       |       |       |       |      |      |     1.26 | Cpu (4)                     |          |
|     8 |          VIEW                   |                         |   81771 | 23273 |         1 |    +86 |     1 |       1M |       |       |       |       |      |      |          |                             |          |
|     9 |           HASH UNIQUE           |                         |   81771 | 23273 |        75 |    +12 |     1 |       1M |       |       |       |       |      |      |     1.89 | Cpu (6)                     |          |
|    10 |            FILTER               |                         |         |       |        78 |     +9 |     1 |      11M |       |       |       |       |      |      |     0.31 | Cpu (1)                     |          |
|    11 |             INDEX FULL SCAN     | PK_HOUSEHOLD_GDC        |   83917 | 22799 |        86 |     +1 |     1 |      11M |     9 | 73728 |       |       |      |      |    24.21 | Cpu (77)                    |          |
|    12 |          INDEX FULL SCAN        | PK_ADV_HOUSEHOLD_ACCT   |      8M |  6332 |        49 |    +86 |     1 |       8M |       |       |       |       |      |      |    12.58 | gc cr block 2-way (37)      |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | gc current block 2-way (3)  |          |
| -> 13 |       INDEX FULL SCAN           | PK_ADV_HOUSEHOLD_ACCT   |      8M |  6332 |       180 |   +142 |     1 |       7M |       |       |       |       |      |      |     0.63 | Cpu (2)                     |          |
| -> 14 |      INDEX RANGE SCAN           | IDX4_LPL_BETA_CUST_RLTN |       1 |     1 |       181 |   +141 |    3M |       3M | 75759 | 592MB |       |       |      |      |    23.27 | gc current grant 2-way (1)  |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | Cpu (21)                    |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | db file parallel read (52)  |          |
| -> 15 |     TABLE ACCESS BY INDEX ROWID | IMPL_LPL_BETA_CUST_RLTN |       1 |     1 |       180 |   +142 |    3M |       3M |  177K |   1GB |       |       |      |      |    29.56 | Cpu (12)                    |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | db file parallel read (81)  |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | db file sequential read (1) |          |
=================================================================================================================================================================================================================================

Oracle Cloud Infrastructure (OCI): Compute Update Dec 2018

Online Apps DBA - Mon, 2018-12-10 06:45

Do You know what is the OCI: Compute Update in Dec 2018? Upgrade yourself with new Information in My [Blog] Oracle Cloud Infrastructure (OCI): Compute Update Dec 2018 At: https://k21academy.com/oci23 & grab the knowledge about: ✔ Things You Should Know About Compute ✔ OCI Compute New Features: DEC 2018 ✔ Changes in VCN (Virtual Cloud […]

The post Oracle Cloud Infrastructure (OCI): Compute Update Dec 2018 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Disabling Spring Security if you don't require it

Pas Apicella - Sun, 2018-12-09 17:42
When using the Spring Cloud Services Starter Config Client dependency, for example, Spring Security will also be included (Config Servers are protected by OAuth2). As a result, basic authentication is also enabled on all the service endpoints of your application, which may not be the desired result if you're just building a demo.

Add the following to your Spring Boot main class to disable security for all endpoints
  
package com.example.employeeservice;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.WebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@SpringBootApplication
@EnableDiscoveryClient
public class EmployeeServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(EmployeeServiceApplication.class, args);
    }

    // Nested security configuration that tells Spring Security to ignore
    // every request path, effectively switching off basic authentication
    // on all endpoints of this service.
    @Configuration
    static class ApplicationSecurity extends WebSecurityConfigurerAdapter {

        @Override
        public void configure(WebSecurity web) throws Exception {
            web
                .ignoring()
                .antMatchers("/**");
        }
    }
}
Categories: Fusion Middleware

Polymorphic Table Functions – Part 2

Bar Solutions - Sun, 2018-12-09 00:04

In my previous post I wrote about a possible use-case for Polymorphic Table Functions. I also wrote that I had some ideas to implement extra functionality to make the function more generic.
After attending DOAG Konferenz und Ausstellung 2018 and UKOUG Tech18 I finally got around to publishing my new version.
Let's start off with a table to hold my data; I chose to use the NATO phonetic alphabet for this:

create table t_alphabet
(thecount number
,alphabet varchar2(4000)
)
/
begin
  insert into t_alphabet(alphabet, thecount) values ('Alfa;Bravo;Charlie;Delta;Echo;Foxtrot;Golf',7);
  insert into t_alphabet(alphabet, thecount) values ('Hotel;India;Juliett;Kilo;Lima;Mike;November',7);
  insert into t_alphabet(alphabet, thecount) values ('Oscar;Papa;Quebec;Romeo;Sierra;Tango;Uniform',7);
  insert into t_alphabet(alphabet, thecount) values ('Victor;Whiskey;;X-ray;Yankee;;Zulu',5);
  insert into t_alphabet(alphabet, thecount) values ('Alfa.Bravo.Charlie.Delta.Echo.Foxtrot.Golf',7);
  insert into t_alphabet(alphabet, thecount) values ('Hotel.India.Juliett.Kilo.Lima.Mike.November',7);
  insert into t_alphabet(alphabet, thecount) values ('Oscar.Papa.Quebec.Romeo.Sierra.Tango.Uniform',7);
  insert into t_alphabet(alphabet, thecount) values ('Victor.Whiskey..X-ray.Yankee..Zulu',5);
  commit;
end;
/

First of all, I learned from attending the presentations by Andrej Pashchenko (Polymorphic Table Functions in 18c: Einführung und Beispiele) and Keith Laker (Patterns and Use Cases For Polymorphic Tables) that there is no need for global (package) variables to have access to the parameters supplied. The more I can rely on Oracle to take care of the values of variables, the better I like it.
I won't bore you with all the intermediate versions of the code; let's jump straight into the 'final' result.
The package:

create or replace package separated_ptf is
  function describe(tab        in out dbms_tf.table_t
                   ,cols       in dbms_tf.columns_t default null
                   ,coltosplit in varchar2 default null
                   ,separator  in varchar2 default ';') return dbms_tf.describe_t;

  procedure fetch_rows(coltosplit in varchar2 default null
                      ,separator  in varchar2 default ';');
end separated_ptf;
/

The package body:

create or replace package body separated_ptf as
  function describe(tab        in out dbms_tf.table_t
                   ,cols       in dbms_tf.columns_t default null
                   ,coltosplit in varchar2 default null
                   ,separator  in varchar2 default ';') return dbms_tf.describe_t as
    -- metadata for column to add
    l_new_col dbms_tf.column_metadata_t;
    -- table of columns to add
    l_new_cols dbms_tf.columns_new_t;
    -- make sure the column to split is in the correct format (uppercase with doublequotes)
    l_coltosplit dbms_quoted_id := dbms_assert.enquote_name(str => coltosplit, capitalize => true);
  begin
    -- if the coltosplit parameter is null then
    if coltosplit is null then
      -- Mark the first column ReadOnly and don't display it anymore
      tab.column(1).for_read := true;
      tab.column(1).pass_through := false;
    else
      -- if the coltosplit parameter is not null then
      -- check every column from the source table
      for indx in tab.column.first .. tab.column.last loop
        -- if this is the column we want to split then
        if tab.column(indx).description.name = l_coltosplit then
          -- Mark this column ReadOnly and don't display it anymore
          tab.column(indx).for_read := true;
          tab.column(indx).pass_through := false;
        end if;
      end loop;
    end if;
    -- Add the new columns, as specified in the cols parameter
    for indx in 1 .. cols.count loop
      -- define metadata for column named cols(indx)
      -- that will default to a datatype of varchar2 with
      -- a length of 4000
      l_new_col := dbms_tf.column_metadata_t(name => cols(indx));
      -- add the new column to the list of new columns
      l_new_cols(l_new_cols.count + 1) := l_new_col;
    end loop;
    -- Instead of returning NULL we will RETURN a specific
    -- DESCRIBE_T that adds new columns
    return dbms_tf.describe_t(new_columns => l_new_cols);
  end;

  procedure fetch_rows(coltosplit in varchar2 default null
                      ,separator  in varchar2 default ';') is
    -- define a table type of varchar2 tables
    type colset is table of dbms_tf.tab_varchar2_t index by pls_integer;
    -- variable to hold the rowset as retrieved
    l_rowset dbms_tf.row_set_t;
    -- variable to hold the number of rows as retrieved
    l_rowcount pls_integer;
    -- variable to hold the number of put columns
    l_putcolcount pls_integer := dbms_tf.get_env().put_columns.count;
    -- variable to hold the new values
    l_newcolset colset;
    -- get the name of the column to be split from the get columns
    l_coltosplit dbms_quoted_id := trim('"' from dbms_tf.get_env().get_columns(1).name);
  begin
    --    dbms_tf.Trace(dbms_tf.Get_Env);
    -- fetch rows into a local rowset
    -- at this point the rows will have columns
    -- from the table/view/query passed in
    dbms_tf.get_row_set(l_rowset, l_rowcount);
    -- for every row in the rowset...
    for rowindx in 1 .. l_rowcount loop
      -- for every column
      for colindx in 1 .. l_putcolcount loop
        -- split the row into separate values
        --  FUNCTION Row_To_Char(rowset Row_Set_t,
        --                       rid    PLS_INTEGER,
        --                       format PLS_INTEGER default FORMAT_JSON)
        --           return VARCHAR2;
        -- splitting the regexp way: http://nuijten.blogspot.com/2009/07/splitting-comma-delimited-string-regexp.html
        l_newcolset(colindx)(rowindx) := trim(separator from regexp_substr(json_value(dbms_tf.row_to_char(l_rowset, rowindx), '$.' || l_coltosplit)
                                                           ,'[^' || separator || ']*' || separator || '{0,1}'
                                                           ,1
                                                           ,colindx));
      end loop; -- every column
    end loop; -- every row in the rowset
    -- add the newly populated columns to the rowset
    for indx in 1 .. l_putcolcount loop
      dbms_tf.put_col(columnid => indx, collection => l_newcolset(indx));
    end loop;
  end;
end separated_ptf;
/

The wrapper function:

create or replace function separated_fnc(p_tbl      in table
                                        ,cols       columns default null
                                        ,coltosplit in varchar2 default null
                                        ,separator  in varchar2 default ';') return table
  pipelined row polymorphic using separated_ptf;
/

Back to the improvements I suggested earlier.
Supporting duplicate separators:
Using the regular expression ‘[^;]+’ didn’t make this possible, because a double ; (;;) would be regarded as one. So this had to be changed into ‘[^;]*;{0,1}’. This expression says (in my words): find zero or more characters which are not a ;, followed by 0 or 1 ;. Since this will result in a string with a ; at the end, I had to add the trim function around it.
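
As a quick stand-alone illustration (my own test query, not part of the original post), the pattern returns the empty element for a doubled separator instead of skipping it:

-- occurrence 3 comes back as NULL: the empty element between the two ;
select level as occurrence
     , trim(';' from regexp_substr('Victor;Whiskey;;X-ray', '[^;]*;{0,1}', 1, level)) as element
  from dual
connect by level <= 4;
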
Making the column to be split up a parameter
To find the column to be split, I need to make the value look the same as the tab.column(indx).description.name value. I could of course add quotes around the parameter myself, but I think it is better to use the built-in sys.dbms_assert.enquote_name function to do this. If the value is not supplied or null, I just (try to) split the first column.
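
A quick check of what that call produces (my own snippet; run with serveroutput on):

-- dbms_assert.enquote_name uppercases (capitalize => true) and double-quotes the value,
-- matching the format of tab.column(indx).description.name
declare
  l_quoted dbms_quoted_id;
begin
  l_quoted := dbms_assert.enquote_name(str => 'alphabet', capitalize => true);
  dbms_output.put_line(l_quoted);  -- "ALPHABET"
end;
/
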
Making the separator character a parameter
This seemed like an easy task, just replace every ; in my code by a variable, but when running a couple of tests I received an error which put me on the wrong path.
Calling the function using positional parameters works like a charm, just as you would expect:
select *
from separated_fnc(t_alphabet, columns(first, second, third, fourth, fifth, sixth, seventh), 'alphabet', '.')
/

  THECOUNT FIRST                SECOND               THIRD                FOURTH               FIFTH                SIXTH                SEVENTH
---------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
         7 Alfa;Bravo;Charlie;D                                                                                                          
           elta;Echo;Foxtrot;Go                                                                                                          
           lf                                                                                                                            
         7 Hotel;India;Juliett;                                                                                                          
           Kilo;Lima;Mike;Novem                                                                                                          
           ber                                                                                                                           
         7 Oscar;Papa;Quebec;Ro                                                                                                          
           meo;Sierra;Tango;Uni                                                                                                          
           form                                                                                                                          
         5 Victor;Whiskey;X-ray                                                                                                          
           ;Yankee;Zulu                                                                                                                  
         7 Alfa                 Bravo                Charlie              Delta                Echo                 Foxtrot              Golf
         7 Hotel                India                Juliett              Kilo                 Lima                 Mike                 November
         7 Oscar                Papa                 Quebec               Romeo                Sierra               Tango                Uniform
         5 Victor               Whiskey                                   X-ray                Yankee                                    Zulu

8 rows selected

But when I tried to use named parameters like this:

select *
  from separated_fnc(t_alphabet, columns(first, second, third, fourth, fifth, sixth, seventh),coltosplit => 'alphabet',separator => '.')
/

it resulted in the following error:

ORA-62573: new column (FIRST) is not allowed with describe only polymorphic table function

After the DOAG conference I looked at this together with Chris Saxon, but we couldn't figure out what was going on. So we contacted Keith Laker, who told me that it was a bug in the version (18.3.0.0.0) of the database I am using and that it should be fixed in an upcoming (patch) release.
As long as you are aware of this behavior, I think it is quite a useful function (and with the upcoming fixes it will become even more useful).

If you have any suggestions or maybe ideas for other use cases for Polymorphic Table Functions, please don’t hesitate to use the comments.

Cloud Control 13c Release 3 Database Template

Michael Dinh - Sat, 2018-12-08 18:20

Got excited to find:
Creating a Database Instance with Preconfigured Repository Using Database Templates

Templates can be downloaded at:
Database Template (with EM 13.3.0.0 repository pre-configured) for Installing Oracle Enterprise Manager Cloud Control 13c Release 3 (13.3.0.0)

Got disappointed to find template is only available for 12.1.0.2.0 DB and 18.1.0.0.0 DB.

Creating a Database Instance with Preconfigured Repository Using Database Templates Caution:

Make sure that the 18.1.0.0.0_Database_Template_for_EM13_3_0_0_0_.zip file is used only for Enterprise Manager Cloud Control 13c Release 3.

It looks like the template version must match the database version exactly, as the example below shows.

EM 13c: Creating an Enterprise Manager 13c Cloud Control Repository using a Database Template Fails: ORA-00603: ORACLE server session terminated by fatal error (Doc ID 2291220.1)	

While the 12.2.0.1 version of the database is supported for EM 13.2 repository use, a database template for the EM 13.2 repository database is only supplied for the 12.1.0.2 version of the database, and they are specific for the version:

Oracle skipped the 12.2 DB version when creating DB templates, which is disappointing.

Does the database really need to be on 12.2?

First immersion in the Docker Conference EU 2018

Yann Neuhaus - Sat, 2018-12-08 12:02

In short, a very interesting event for both Devs and Ops. Every day was organized around workshops, hands-on-lab sessions and Hallway tracks. My colleague Mehdi Bada and I tried to attend as many sessions as possible, but obviously it was difficult to cover all the topics.

[Image: DockerCon EU 2018]

Anyway, the workshops and hands-on labs were very interesting, especially if you like to mix theory and practice. But I have to admit we sometimes struggled to keep up with the pace of some workshops. We ran into what I call the “context switch” issue: following the trainer’s explanation and doing the exercises at the same time :) The Migrating .NET Applications to Docker workshop with Elton Stoneman (Docker) is one that comes to mind here :)

As database specialists at dbi services we obviously had a special focus on storage-oriented topics, and we were interested in attending sessions and workshops in this area, including Use Cases and Practical Solutions for Docker Container Storage on Swarm and K8s with Don Stewart (Docker) and Mark Church (Docker), as well as the Container Storage Panel Q&A with Ed Beauvais (Oracle), Chris Brandon (Storage OS Inc) and Keith Hudgins (Docker). We got an overview of the different solutions for implementing file-based, block-based and object-based storage in order to address different workload patterns including file server, OLTP, Big Data etc. The container storage landscape is currently large and vendor-specific, but the Docker storage team announced plans to introduce first snapshot / restore capabilities and to provide a CSI (Container Storage Interface) implementation to offer a simple, community-driven approach and a more predictable and functional interface for the most common use cases. Let's see what happens in the future, but my guess (speculation mode) is that for “specific” applications like databases, vendor storage drivers will likely remain the most viable option when performance is at the heart of the concerns.

[Image: container storage drivers overview]

Even if containers were formally designed to handle stateless applications, it is not uncommon to see databases in such infrastructures nowadays. After all, databases are also (special) applications, right? I was already convinced that containerization infrastructure is now mature enough to handle database workloads, and these sessions reinforced my strong belief that Docker Swarm and K8s are production-ready for databases, from a storage perspective at least.

We also got the opportunity to attend workshops and sessions around container orchestrator topics, mainly Docker Swarm and K8s orchestration. It was interesting to see the same question raised by attendees during these sessions: do we have to use Swarm over K8s, or vice versa? This is also a question we are going to ask ourselves for a dbi services internal project, by the way, and obviously there is no black-or-white answer. What is certain is that Docker Swarm remains important for customers, as confirmed by Steve Singh during the first general session on Tuesday 4th December 2018. We also got feedback from customer stories like Citizens Bank that the orchestrator choice depends on different factors, and in the context of this customer they are using both from Docker EE :) We also attended other interesting Swarm and K8s infrastructure topics, including Swarm Orchestration – Features and Workflows by Bret Fisher (Docker Captain) and Container Networking for Swarm and Kubernetes in Docker Enterprise by Guillaume Morini (Docker). Finally, probably one of my favorite workshops was Troubleshooting with Sysdig by Michael Ducy. Sysdig is one of the well-known monitoring / troubleshooting tools for containers on the market. Let's say it was a subtle combination of deep-dive immersion into Linux kernel principles and practical scenarios using the sysdig tools to fix container issues, as confirmed by my working desktop below:

[Image: working desktop during the sysdig troubleshooting workshop]

In addition to the sessions, workshops and hands-on labs, new announcements were made at DockerCon EU 2018 during the general sessions, with Steve Singh (CEO) and Scott Johnston (Chief Product Officer) as the main speakers.

[Image: DockerCon EU 2018 general session]

The first general session's announcements included new tools such as docker-app and docker-assemble, enhanced docker stack support for both Swarm and Kubernetes since Docker EE 2.0, and probably the most anticipated one: Docker Desktop Enterprise. It turns out that the adoption of Docker Desktop by developers was a real success, but it was not really designed to scale to Enterprise-class environments, and this is basically what the Docker Desktop Enterprise product is supposed to address.

It was also an opportunity to get some interesting figures about Docker (EE) adoption across the world:

  • 1M new developers
  • 5M new applications
  • 1B containers downloaded every week
  • 650+ customers on Docker EE
  • 76.4% of companies running mission-critical apps in containers in production

The last one is by far my favorite because it highlights that most Docker environments are no longer limited to developer-scoped environments. Moreover, it is worth noting that the following figures seem to point out that Docker is no longer just a visionary developer whim and that it drives strong customer adoption thanks to an interesting ROI:

  • 69% differentiating products and services vs competitors
  • 70% increasing sales of products
  • 71% bringing products to market faster

Finally, let's finish with the second general session, which was more Docker community-oriented, and I know how important a community can be for contribution and for bringing people together to interact. As Microsoft has its MVPs, Docker Captains are the Docker counterpart, and they were thanked for their wonderful contribution during this event. But obviously contribution goes beyond MVPs, ACEs or Docker Captains, and Kal De (CTO, EVP, Product Development) explained how to contribute to the different Docker projects and then showed contribution figures from the community on GitHub:

  • Compose 1 MM monthly
  • 25K new compose files published on GitHub per week
  • 14K GitHub contributors – 280+ people

This first immersion in the Docker world conference was definitely a good experience and a great opportunity to feel the momentum around Docker and the future directions set by the company. I also appreciated the discussions and feedback from other attendees during the Hallway track, which will help us prepare for our future challenges on this topic.

The article First immersion in the Docker Conference EU 2018 appeared first on Blog dbi services.

The First Open, Multi-cloud Serverless Platform for the Enterprise Is Here

Pas Apicella - Sat, 2018-12-08 05:30
That’s Pivotal Function Service, and it’s available as an alpha release today. Read more about it here

https://content.pivotal.io/blog/the-first-open-multi-cloud-serverless-platform-for-the-enterprise-is-here-try-out-pivotal-function-service-today

Docs as follows

https://docs.pivotal.io/pfs/index.html
Categories: Fusion Middleware

AWS: IAM & security - Best practices: Using a non-root user

Dietrich Schroff - Fri, 2018-12-07 18:35
After my successful Solutions Architect practice I knew that I had to take care of my shortcomings in security.
So I decided to visit IAM (Identity and Access Management) in AWS:

So let's move to IAM users:
Click on "add user":

Then insert a "user name", choose an access type and click next:

Then you have to create the first group:

 I created a group with full AWS access:



Then move on with creating the user:

 Additional Tags:
 And finally click "create user"
 This will show you a page with an AWS management console URL:

Use this URL to login with the new user:
 (I had to change the password - the checkbox "require password reset" was ticked)

And then I am logged into my AWS Management Console with this non-root user:


If you want to log in with your root user, you have to use the link below the "sign in" button:
