Feed aggregator

architecture

Tom Kyte - Tue, 2017-07-04 22:06
1) How does an INSERT statement work internally in the Oracle architecture - how is the data stored in a data block? 2) How does an UPDATE statement work internally in the Oracle architecture - how does it update a row?
Categories: DBA Blogs

Get max permutation

Tom Kyte - Tue, 2017-07-04 22:06
I have a table with 1 column. I need the max permutation (all pairs) of the row values. COLUMN1 -------- a b c d For the above table the output should be: a_b a_c a_d b_c b_d c_d Your response in this regard is deeply appreciated.
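
One way to produce that output - not part of the original question, just a sketch assuming the table is called T and the values are distinct - is a self-join that pairs each value with every "larger" value:

select a.column1 || '_' || b.column1 as pair
from   t a
       join t b on b.column1 > a.column1
order  by a.column1, b.column1;
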
Categories: DBA Blogs

How to connect SQLPlus without tnsnames.ora

Tom Kyte - Tue, 2017-07-04 22:06
Hi, I am trying to use SQL*Plus to connect to a database directly using just the connect string (i.e. without referencing tnsnames.ora). I have this in my tnsnames.ora: POD = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(H...
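
For reference, SQL*Plus can connect without a tnsnames.ora lookup by using EZConnect or by passing the full descriptor inline. A minimal sketch (host and service name are hypothetical):

SQL> connect scott/tiger@//dbhost.example.com:1521/pod_service
SQL> -- or supply the whole descriptor instead of a tnsnames.ora alias:
SQL> connect scott/tiger@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost.example.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=pod_service)))
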
Categories: DBA Blogs

Validate Performance Improvement Using Query Folding Feature in Power BI

Ittichai Chammavanijakul - Tue, 2017-07-04 19:27

I’ve been using Power BI for a couple of months now, not as a developer, but as a system architect. I may not deal with dashboard and report development on a daily basis; however, as an end user I use Power BI extensively to monitor Azure and Power BI usage, including audit and billing. I would like to learn this tool down to its nuts and bolts. The intention of this blog series is to document and share what I’ve learned on this journey.

The first area I’d like to explore is performance, as it is the most talked-about topic within our developer circle and the Power BI community. One suggestion I’ve seen quite often when searching for Power BI performance improvements is to utilize Query Folding, which basically pushes query logic and filters down to the database. Let the database do what it does best – extract and process the data before sending it back to the Power BI Desktop client. Done correctly, this reduces the workload that Power BI has to do on the client side.

There are already a couple of well-written articles about Query Folding and its benefits, so there is no need for me to repeat them here.

Part of this exercise is to learn how to measure the expected performance improvement when using Query Folding.

Setup:

I’m connecting to a 1.3-million-row Oracle database table.

SQL> SELECT COUNT(*) FROM TABLE_ADDRESS_T;
COUNT(*)
----------
1312127

 

I created a series of transformations, some supporting Query Folding and some not. The first two – Filtered Rows and Uppercased Text – in the sample below are the ones that support Query Folding. We can confirm this by seeing that View Native Query is available when right-clicking on the last applied step.

Or we can confirm it by viewing the query itself and seeing that the native SQL query is rewritten to include those transformations.
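
To illustrate what such a rewritten query can look like (this is a mock-up with hypothetical column names, not the actual native query from the trace), folding a filter step and an uppercase step might produce Oracle SQL along these lines:

select UPPER("ADDRESS_TYPE") as "ADDRESS_TYPE",
       "CITY",
       "POSTAL_CODE"
from   "TABLE_ADDRESS_T"
where  "COUNTRY_CODE" = 'US'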

The next transformation, Split Column by Delimiter, does not support Query Folding. Note that View Native Query is now disabled.

The plan is to run these transformations so that most of them utilize Query Folding, then take a measurement. Then move the unsupported transformation to the top so that the rest cannot utilize Query Folding and the transformations are processed on the client instead.

If I ran only this small set of transformations, we'd probably not see much difference in terms of runtime, so I added more transformations by manually modifying the M query as seen below – in this case, just adding alternating upper- and lower-case transformations.

To measure the total run time, I’m using Rui Romano’s Power BI Desktop Trace Logs Analyser, which reads and visualizes Power BI Desktop’s diagnostic trace files.

Before each run, the cache is cleared and tracing is enabled. Once each run is done, tracing is disabled.

 

Outputs:

A. 1st Run data refresh utilizing Query Folding

B. 1st Run data refresh without utilizing Query Folding

 

C. 2nd Run data refresh utilizing Query Folding

D. 2nd Run data refresh without utilizing Query Folding

The outcome clearly confirms our expectation. I think the difference in numbers would be even greater if more transformations, especially complex ones, were added.

In a future post, I’d like to explore further to understand what these Action Details are, though I’m not confident I will find much. It seems this information may not be shared with customers, according to the discussion here.

 

Categories: DBA Blogs

Fire an employee? It's that easy!

Flavio Casetta - Tue, 2017-07-04 15:45
Categories: DBA Blogs

KPI Dashboards for Retail Industry

Nilesh Jethwa - Tue, 2017-07-04 12:20

The retail industry is highly competitive. It is one of the fastest growing industries worldwide.

Based on the 2015 Global Powers of Retailing Report, the US has seventy-six of the world’s largest retailing companies. With the growing number of retailers, staying on top will be a tough job.

Retailers who want to rise above their competitors must know all the details about their company. This is where a Key Performance Indicator (KPI) comes into play.

KPIs are helpful in evaluating and monitoring a company’s performance in achieving certain targets or goals. KPIs can also help retailers come up with the right business decisions.

Having a KPI dashboard which contains the right data at the right time can help improve the productivity and performance of your company.

While there are many KPIs that a retailer can keep track of, we have compiled ten of the most helpful KPIs for any retail business.

Read more at http://www.infocaptor.com/dashboard/kpi-dashboards-for-retail-industry

SSL/TLS: How to choose your cipher suite

Amis Blog - Tue, 2017-07-04 11:00

For SSL/TLS connections, cipher suites determine to a large extent how secure the connection will be. A cipher suite is a named combination of authentication, encryption, message authentication code (MAC) and key exchange algorithms used to negotiate the security settings (here). But what does this mean, and how do you choose a secure cipher suite? The area of TLS is quite extensive and I cannot cover it in its entirety in a single blog post, but I will provide some general recommendations based on several articles researched online. At the end of the post I’ll provide some suggestions for strong cipher suites for JDK8.

Introduction

First I’ll introduce what a cipher suite is and how it is agreed upon by client / server. Next I’ll explain several of the considerations which can be relevant while making a choice of cipher suites to use.

What does the name of a cipher suite mean?

The names of the cipher suites can be a bit confusing. You see for example a cipher suite called: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 in the SunJSSE list of supported cipher suites. You can break this name into several parts:

  • TLS: transport layer security (duh..) (see http://www.jscape.com/blog/cipher-suites)
  • ECDHE: The key exchange algorithm is ECDHE (Elliptic curve Diffie–Hellman, ephemeral).
  • ECDSA: The authentication algorithm is ECDSA (Elliptic Curve Digital Signature Algorithm). The certificate contains an ECDSA public key, signed by the certificate authority. This is what, for example, Bitcoin uses.
  • WITH_AES_256_CBC: This is used to encrypt the message stream. (AES = Advanced Encryption Standard, CBC = Cipher Block Chaining). The number 256 indicates the key size.
  • SHA_384: This is the so-called message authentication code (MAC) algorithm. SHA = Secure Hash Algorithm. It is used to create a message digest or hash of a block of the message stream, which can be used to validate whether the message contents have been altered. The number indicates the size of the hash; larger is more secure.

If the key exchange algorithm or the authentication algorithm is not explicitly specified, RSA is assumed. See for example here for a useful explanation of cipher suite naming.

What are your options

First it is a good idea to look at what your options are. This depends on the (client and server) technology used. If for example you are using Java 8, you can look here (SunJSSE) for supported cipher suites. If you want to enable the strongest ciphers available to JDK 8 you need to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files (here). You can find a large list of cipher suites and which version of the JDK supports them (up to Java 8 in the case of the Java 8 documentation). Node.js uses OpenSSL for cipher suite support; this library supports a large array of cipher suites. See here.

How determining a cipher suite works

On the server, cipher suites are listed in preference order. How does that work? During the handshake phase of establishing a TLS/SSL connection, the client sends its supported cipher suites to the server. The server chooses the cipher suite to use based on the preference order and what the client supports.


This works quite efficiently, but a problem can arise when

  • There is no overlap in ciphers the client and server can speak
  • The only overlap between client and server supported cipher is a cipher which provides poor or no encryption

This is illustrated in the image below. The language represents the cipher suite. The order/preference specifies the encryption strength. In the first illustration, client and server can both speak English so the server chooses English. In the second image, the only overlapping language is French. French might not be ideal to speak but the server has no other choice in this case but to accept speaking French or to refuse talking to the client.

Thus it is good practice for the server to only select specific ciphers which conform to your security requirements, while of course taking client compatibility into account.

How to choose a cipher suite

Basics

Check which cipher suites are supported

There are various mechanisms to check which ciphers are supported. For cloud services or websites you can use SSLLabs. For internal server checking, you can use various scripts available online such as this one or this one.

TLS 1.2

Of course you only want TLS 1.2 cipher suites since older TLS and SSL versions contain security liabilities. Within TLS 1.2 there is a lot to choose from. OWASP provides a good overview of which ciphers to choose here (‘Rule – Only Support Strong Cryptographic Ciphers’). Wikipedia provides a nice overview of (among other things) TLS 1.2 benefits such as GCM (Galois/Counter Mode) support which provides integrity checking.

Disable weak ciphers

As indicated before, if weak ciphers are enabled, they might be used, making you vulnerable. You should disable weak ciphers like those with DSS, DSA, DES/3DES, RC4, MD5, SHA1, null, anon in the name. See for example here and here. For example, do not use DSA/DSS: they get very weak if a bad entropy source is used during signing (here). For the other weak ciphers, similar liabilities can be looked up.

How to determine the key exchange algorithm

Types

There are several types of keys you can use. For example:

  • ECDHE: Elliptic curve Diffie–Hellman (DH) key exchange, ephemeral. A new key is generated for every exchange. Because the key is generated per request, it does not provide authentication like ECDH, which uses static keys.
  • RSA: RSA key exchange. Generating DH keys is faster than generating RSA keys. DH also currently seems more popular. DH and RSA keys solve different challenges. See here.
  • ECDH: Elliptic curve Diffie–Hellman key exchange. One key is used for the entire SSL session. The static key can be used for authentication.
  • DHE: Classic (finite field) Diffie–Hellman key exchange, ephemeral. A new key is used for every exchange. Same idea as ECDHE, but a different algorithm is used for the calculation of the shared secrets.

There are other key algorithms but the above ones are most popular. A single server can host multiple certificates such as ECDSA and RSA certificates. Wikipedia is an example. This is not supported by all web servers. See here.

Forward secrecy

Forward secrecy means that if a private key is compromised, past messages which were sent cannot also be decrypted. Read here. Thus it is beneficial for your security to have perfect forward secrecy (PFS).

The difference between ECDHE/DHE and ECDH is that for ECDH one key is used for the duration of the SSL session (which can be used for authentication), while with ECDHE/DHE a distinct key is used for every exchange. Since this key is not a certificate/public key, no authentication can be performed. An attacker can use their own key (here). Thus when using ECDHE/DHE, you should also implement client key validation on your server (2-way SSL) to provide authentication.

ECDHE and DHE give forward secrecy while ECDH does not. See here. ECDHE is significantly faster than DHE (here). There are rumors that the NSA can break DHE keys, so ECDHE keys are preferred (here). On other sites it is indicated that DHE is more secure (here). The calculation used for the keys is also different: DHE is prime field Diffie–Hellman, ECDHE is elliptic curve Diffie–Hellman. ECDHE can be configured; ECDHE ciphers must not support weak curves, e.g. less than 256 bits (see here).

Certificate authority

The certificate authority you use to get a certificate from to sign the key can have limitations. For example, RSA certificates are very common while ECDSA is gaining popularity. If you use an internal certificate authority, you might want to check it is able to generate ECDSA certificates and use them for signing. For compatibility, RSA is to be preferred.

How to determine the message encryption mechanism

As a rule of thumb: AES_256 or above is quite common and considered secure. 3DES, EDE and RC4 should be avoided.

The difference between CBC and GCM

GCM provides both encryption and integrity checking (using a nonce for hashing) while CBC only provides encryption (here). You cannot use the same nonce for the same key to encrypt twice when using GCM; this protects against replay attacks. GCM is supported from TLS 1.2.

How to choose your hashing algorithm

MD5 (here) and SHA-1 (here) are old and should not be used anymore. As a rule of thumb, SHA256 or above can be considered secure.

Finally

Considerations

Choosing a cipher suite can be a challenge. Several considerations play a role in making the correct choice here. Just to name a few:

  • Capabilities of server, client and certificate authority (required compatibility); you would choose a different cipher suite for an externally exposed website (which needs to be compatible with all major clients) than for internal security.
  • Encryption/decryption performance
  • Cryptographic strength; type and length of keys and hashes
  • Required encryption features; such as prevention of replay attacks, forward secrecy
  • Complexity of implementation; can developers and testers easily develop servers and clients supporting the cipher suite?

Sometimes even legislation plays a role since some of the stronger encryption algorithms are not allowed to be used in certain countries (we will not guess for the reason but you can imagine).

Recommendation

Based on the above I can recommend some strong cipher suites to be used for JDK8 in preference order:

  • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  • TLS_RSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
  • TLS_DHE_RSA_WITH_AES_256_GCM_SHA384

My personal preference would be to use TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 as it provides

  • Integrity checking: GCM
  • Perfect forward secrecy: ECDHE
  • Uses strong encryption: AES_256
  • Uses a strong hashing algorithm: SHA384
  • It uses a key signed with an RSA certificate authority which is supported by most internal certificate authorities.

Since ECDHE does not provide authentication, you should tell the server to verify client certificates (implement 2-way SSL).

The post SSL/TLS: How to choose your cipher suite appeared first on AMIS Oracle and Java Blog.

JDeveloper 12.2.1.2 Patch for Transient Expression Compilation Infinite Loop

Andrejus Baranovski - Tue, 2017-07-04 10:26
If you are using JDeveloper 12.2.1.1 or 12.2.1.2, you have probably run into the transient expression compilation infinite loop issue. The infinite loop happens when you open an ADF BC project and navigate to a VO which contains Groovy expressions. JDeveloper starts to print a repeating message in the log - compiling TransientExpression - and at some point the JDeveloper window closes down without any feedback:


For those of you who are not aware - there is a patch for this issue. The patch can be downloaded from Oracle Support; search for Patch 25218838 (there is one for JDEV 12.2.1.2 and another one for JDEV 12.2.1.1):


If it's your first time applying an Oracle patch - no worries, the process is very simple and smooth. Extract the downloaded patch zip archive first. Next set up the ORACLE_HOME environment variable, pointing to the root folder of the JDEV install:


Once the environment variable is set, run OPatch by executing opatch apply from the directory where the patch archive was extracted. You can reference OPatch by its direct path:


Hopefully the fix provided by this patch will be included in the next JDEV version by default.

Common Questions and Misconceptions in The Data Science Field

Rittman Mead Consulting - Tue, 2017-07-04 09:06

There are many types of scenarios in which data science could help your business. For example, customer retention, process automation, improving operational efficiency or user experience.

It is not however always initially clear which questions to concentrate on, or how to achieve your aims.

This post presents information about the type of questions you could address using your data and common forms of bias that may be encountered.

Types of Question
  • Descriptive: Describe the main features of the data, no implied meaning is inferred. This will almost always be the first kind of analysis performed on the data.

  • Exploratory: Exploring the data to find previously unknown relationships. Some of the found relationships may define future projects.

  • Inferential: Looking at trends in a small sample of a data set and extrapolating to the entire population. In this type of scenario you would end up with an estimation of the value and an associated error. Inference depends heavily on both the population and the sampling technique.

  • Predictive: Look at current and historical trends to make predictions about future events. Even if x predicts y, x does not cause y. Accurate predictions are hard to achieve and depend heavily on having the correct predictors in the data set. Arguably, more data often leads to better results; however, large data sets are not always required.

  • Causal: To get the real relationship between variables you need to use randomised control trials and measure average effects, i.e. if you change x by this much, how does y change? Even though this can be carried out on observed data, huge assumptions are required and large errors would be introduced into the results.

Biases in data collection or cleaning

It is very easy to introduce biases into your data or methods if you are not careful.
Here are some of the most frequent:

  • Selection/sampling bias: If the population selected does not represent the actual population, the results are skewed. This commonly occurs when data is selected subjectively rather than objectively or when non-random data has been selected.

  • Confirmation bias: Occurs when there is an intentional or unintentional desire to prove a hypothesis, assumption, or opinion.

  • Outliers: Extreme data values that are significantly out of the normal range of values can completely bias the results of an analysis. If the outliers are not removed in these cases the results of the analysis can be misleading. These outliers are often interesting cases and ought to be investigated separately.

  • Simpson's Paradox: A trend that is indicated in the data can reverse when the data is split into comprising groups.

  • Overfitting: Involves an overly complex model which overestimates the effect/relevance of the examples in the training data and/or starts fitting to the noise in the training data.

  • Underfitting: Occurs when the underlying trend in the data is not found. Could occur if you try to fit a linear model to non linear data or if there is not enough data available to train the model.

  • Confounding Variables: Two variables may be assumed related when in fact they are both related to an omitted confounding variable. This is why correlation does not imply causation.

  • Non-Normality: If a distribution is assumed to be normal when it is not the results may be biased and misleading.

  • Data Dredging: This process involves testing huge numbers of hypotheses about a single data set until the desired outcome is found.

Insights Lab

To learn more about the Rittman Mead Insights Lab please read my previous blog post about our methodology.

Or contact us at info@rittmanmead.com

Categories: BI & Warehousing

Full Table Scan With Partition Hint

Tom Kyte - Tue, 2017-07-04 03:46
Hi, my situation is this: 1) I have a table that is partitioned on a given column (we'll call the partition p1). 2) After loading the p1 partition, stats are run for that partition only (not the whole table). 3) Once stats are run, a selec...
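
For context, a full scan restricted to a single partition is typically requested with a partition-extended table name, optionally combined with a FULL hint; a hypothetical sketch (table and partition names invented):

select /*+ full(t) */ count(*)
from   sales_data partition (p1) t;
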
Categories: DBA Blogs

Reporting Tools and Dashboard for Operations Management

Nilesh Jethwa - Mon, 2017-07-03 12:44

As an operations manager, you play the role of an overseer. You are monitoring the day-to-day operations of your store or business, keeping track of factory efficiency, optimizing logistics and safety. You are practically keeping watch of everything that happens in the business and you find yourself just everywhere.

Being a highly functional entity in your business requires that you have a continuous feed of important information about your business. You need data to tell you everything is running smoothly, or tell you something is not going well. This is where a dashboard can help.

What can a dashboard give you?

With the use of reporting tools, such as dashboards, you can create a data reporting system that can help you have an overall view of your operations whenever you need it.

With a dashboard, you can drill down into important details and identify issues before they spin out of control. Once issues have been identified, you can quickly alert stakeholders about them and help them determine necessary measures to take.

Most dashboards systems are customizable and can be operated through a self-service platform. You can design yours in such a way that you can combine multiple data from different sources and look for correlations among data to help you come up with better decisions for your business.

Operational Dashboards

There are three types of dashboards: strategic, operational and analytic. The kind of dashboard for people running a business's daily operations is the operational type.

Whereas strategic dashboards are used to monitor key performance indicators and analytic dashboards are used for analyzing huge volumes of data to identify trends and predict outcomes, operational dashboards are used to monitor day-to-day processes.

These processes obviously change minute by minute, and the dashboard is designed to capture the KPIs and metrics associated with them. You expect this type of dashboard to give you very frequent updates. Used to monitor short-term progress towards identified goals, dashboards like this are engineered so that they can be viewed several times during the day, giving users different data with each view.

 

Read more at http://www.infocaptor.com/dashboard/reporting-tools-and-dashboard-for-operations-management

How to start with Amazon cloud server

Amis Blog - Mon, 2017-07-03 08:56

Just created an Amazon account and want to create a first server? Use the interactive guide (Launch Instance button) to create your own Oracle server within 5 minutes. Here are some practical notes for creating a new instance.

After logging in to AWS, select a nearby region; this helps the speed of network traffic. At the upper right of the webpage you can select a region. When you want to develop, the US East region is the region to select. Creating an Oracle environment may be done in a region of your choice. With Amazon it is also good to know that pricing per server differs between regions. Before you start, you can check the following link to see where you can get the best price for your environment (on demand).

https://aws.amazon.com/ec2/pricing/on-demand/

  • For a first Oracle environment it is better to choose an existing Amazon Machine Image (AMI). To create a new instance, press the Launch Instance button on the dashboard. You will be guided through several steps to create a new instance. For this trial we show you how to create an Oracle environment for test usage.
  • For our first server I use a predefined image created by a colleague; there are several predefined Machine Images available. On the first tab we choose a Linux image. When creating a server for an Oracle database, it is also possible to start with RDS (Oracle database). On the second tab, we can select an instance type. It depends on the software you want to install on the instance; for Oracle middleware applications such as Database or WebLogic, an instance with 2 cores and 4 or 8 GB of memory is suitable. Below is an explanation of the codes used by Amazon:

T<number> generic usage for development and test environments

M<number> generic usage for production environments

C<number> CPU intensive usage

G<number> Graphical solutions such as video streaming

R<number> Memory intensive systems

I<number> IO intensive systems.

Costs per instance per hour are on the website; see the link above.

  • On the 3rd tab only the IAM role has to be set. Create a new one if you do not already have one. When creating a new one, select one for Amazon EC2 and then AdministratorAccess for your own environment. When saved, you have to push the refresh button before it is available in the dropdown box. Leave everything else as is to avoid additional costs.
  • On the 4th tab you can select additional storage for your instance. Select a separate disk instead of enlarging the existing disk; this is better for an Oracle environment. So press the Add New Volume button for more disk space. Volume type EBS is right; only change the size you want to use. Volume type GP2 stands for General Purpose (see picture below).

  • An extra volume appended to an instance will remain after a reboot, giving you a reusable instance; otherwise you would have to install the Oracle software again.
  • The next tab is for adding tags to your Oracle environment. The TAG Name will also be displayed directly when using the dashboard to look at the instances. Other tags are optional but very helpful for colleagues; department, name or id of the owner are good candidates.
  • On the 6th tab you have to configure a security group; you want to avoid access for anyone, from everywhere, on the default port 22. When selecting the My IP option in the source drop-down menu, only your own IP will be allowed to connect on port 22. Other ports can be configured as well. An Oracle database or WebLogic will use different ports, so you have to configure those too.
  • On the last tab, review and launch by pressing the Launch button. You will be asked to select or create a key; when it is your first server, create a key for use with PuTTY or other SSH applications. A private key will be generated, which you have to store carefully, because it is given only once. For using the key with PuTTY, see the following link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html
  • You can also reuse a key if you already have one.
  • Your system will be ready for use after several minutes (state = running). You first have to run a yum update on the system by typing: sudo yum update -y
  • Now the system is ready to install the Oracle software and create an Oracle database or WebLogic environment.

The post How to start with Amazon cloud server appeared first on AMIS Oracle and Java Blog.

OAC: Essbase – Incremental Loads / Automation

Rittman Mead Consulting - Mon, 2017-07-03 08:56

I recently detailed data load possibilities with the tools provided with Essbase under OAC here. Whilst all very usable, my thoughts turned to systems that I have worked on and how the loads currently work, which led to how you might perform incremental and / or automated loads for OAC Essbase.

A few background points:

  • The OAC front end and EssCS command line tools contain a ‘clear’ option for data, but both are full data clears – there does not seem to be a partial or specifiable ‘clear’ available.
  • The OAC front end and EssCS command line tools contain a ‘file upload’ function for (amongst other things) data, rules, and MAXL (msh) script files. Whilst the front-end operation has the ability to overwrite existing files, the EssCS Upload facility (which would be used when trying to script a load) seemingly does not – if an attempt is made to upload a file that already exists, an error is shown.
  • The OAC ‘Job’ facility enables a data load to be conducted with a rules file; the EssCS Dataload function (which would be used when trying to script a load) seemingly does not.
  • MAXL still exists in OAC, so it is possible to operate at Essbase ‘command level’

Whilst the tools that are in place all work well and are fine for migration or other manual / adhoc activity, I am not sure what the intended practice might be around some ‘real world’ use cases: a couple of things that spring to mind are

  • Incremental loads
  • Scheduled loads
  • Large ASO loads (using buffers)
Incremental Loads

It is arguably possible to perform an incremental load in that

  • A rules file can be crafted in on-prem and uploaded to OAC (along with a partial datafile)
  • Loads appear to be conducted in overwrite mode, meaning changed and new records will be handled ok

It is possible that (eg) a ‘current month’ data file could be loaded and reloaded to form an incremental load of sorts. The problem here will come if data is deleted for a particular member combination in the source from one day to the next – with no partial clear (eg, of current month data) seemingly possible, there is no way of clearing redundant values (at least for an ASO cube…for a BSO load, the ‘Clear combinations’ functionality of the load rules file can be used…although that has not yet been tested on this version).

So in the case of an ASO cube, the only option using the available tools would be to ensure that ‘contra’ records are added to the incremental load file. This is not ideal, as it is another process to follow in data preparation, and it would also add unnecessary zeros to the cube. For these reasons, I would generally look to effect a partial clear of the ‘slice’ being loaded before proceeding with the incremental load.

The only way I can see of achieving this under OAC would be to take advantage of the fact that MAXL is available and effect the clear using alter database clear data.

This means that the steps required might be

  • Upload prepared incremental data file (either manually via OAC or via EssCS UploadFiles after having first deleted the existing file)
  • Upload on-prem prepared rules file (either manually via OAC or via EssCS UploadFiles after having first deleted the existing file)
  • Access the OAC server (eg via Putty), start MAXL, and run a command to clear the required slice / merge slices (if necessary)
  • In OAC, create / run a job for the specified data file / rules file

I may have missed something, but I see no obvious way of being able to automate this process with the on-board facilities.

Automating the load process

Along with the points listed above, some other facts to be aware of:

  • It is possible to manually transfer files to OAC using FTP
  • It is possible to amend the cron scheduler for the oracle user in OAC

Even bearing in mind the above, I should caveat this section by saying getting ‘under the hood’ in this way is possibly not supported or recommended, and should only be undertaken at your own risk.

Having said that…

By taking advantage of the availability of FTP and cron, it should be possible to script a solution that can run unattended, for full and incremental loads. Furthermore, data clears (full or partial) can be included in the same process, as could parallel buffer loading for ASO or any other MAXL-controllable process (within the confines of this version of Essbase).

The OAC environment

A quick look around discloses that the /u01/latency directory is roughly the equivalent of the ../user_projects/epmsystem1/EssbaseServer/essbaseserver1 (or equivalent) directory in an on-prem release in that it contains the /app ‘parent’ directory which in turn contains a subdirectory structure for all application and cube artefacts. Examining this directory for ASOSamp.Basic shows that the uploaded dataload.* files are here, along with all other files listed by the Files screen of OAC:

Note that remote connection is via the opc user, but this can be changed to oracle once connected (by using sudo su – oracle).

As oracle, these files can be manually deleted…doing so means they will no longer be found by the EssCS Listfiles command or the Files screen within OAC (once refreshed). If deleted manually, new versions of the files can be re-uploaded via either of the methods detailed above (whilst an overwrite option exists in the OAC Files facility, there seems to be no such option with the EssCS Upload feature…trying to upload a file that already exists results in an error).

All files are owned by the oracle user, with no access rights at all for the opc user that effects a remote connection via FTP.

Automation: Objectives

The objective of this exercise was to come up with a method that, unattended, would:

  • Upload received files (data, rules) to OAC from a local source
  • Put them in the correct OAC directory in a usable format
  • Invoke a process that runs a pre-load process (eg a clear), a load, and (if necessary a post load process)
  • Clear up after itself
Automation: The Process

The first job is to handle the upload of files to OAC. This could be achieved via a psftp script that uploads the entire contents of a nominated local directory:

The EssCSUpload.bat script above (which can, of course, be added to a local scheduler so that it runs unattended at appointed times) passes a pre-scripted file to psftp to connect and transfer the files. Note that the opc user is used for the connection, and the files are posted to a custom-created directory, CUSTOM_receive (under the existing /u01/latency). The transferred files are also given a global ‘rw’ attribute to assist with later processing.

Now the files are in the OAC environment, control is taken up there.

A shell script (DealWithUploads) is added to the oracle home directory:

This copies all the files in the nominated receiving directory to the actual required location – in this case, the main ASOSamp/Basic directory. Note the use of ‘-p’ with the copy command to ensure that attributes (ie, the global ‘rw’) are retained. Once copied, the files are deleted from the receiving directory so that they are not processed again.

Once the files are copied into place, startMAXL is used to invoke a pre-prepared msh script:

As can be seen, this clears the cube and re-imports from the uploaded file using the uploaded rules file. The clear here is a full reset, but a partial clear (in the case of ASO) can be used here instead if required.

As with the ‘local’ half of the method, the DealWithUploads.sh script file can be added to the scheduler on OAC: the existing cron entries are already held in the file /u01/app/oracle/tools/home/oracle/crontab.txt; it is a simple exercise to schedule a call to this new custom script.

A routine such as this would need a good degree of refinement and hardening – the file lists for the transfers should be self-building, passwords need to be encrypted, the MAXL script should only be called if required, the posting locations for files should be content/context sensitive, etc – but in terms of feasibility testing the requirements listed above, it was successful.

This approach places additional directories and files in an environment / structure that could be maintained at any time: it is therefore imperative that some form of code control / release mechanism is employed so that it can be replaced in the event of any unexpected / uncontrollable maintenance taking place on the OAC environment that could invalidate or remove it.

Even once hardened, I think there is a considerable weak spot in this approach in that the rules file seemingly has to be crafted in an on-prem environment and uploaded: as I detailed here, even freshly-uploaded, working rules files error when an attempt is made to verify them. For now, I’ll keep looking for an alternative.

Summary

Whilst a lot of the high-level functionality is in place around data loads, often with multiple methods, I think there are a couple of detailed functionality areas that may currently require workarounds – to my mind, the addition of the ability to select & run an msh format ‘preload’ script when running a dataload Job (eg for clears) would be useful, whilst a fully functional rules file editor strikes me as important. The fact that an FTP connection is available at all is a bonus, but because this is as a non-oracle user, it is not possible to put a file in the correct place directly - the EssCS Upload facility does this of course, but the seeming absence of an overwrite option (or an additional Delete option) for EssCS somewhat limits its usefulness at this point. But can you implement an unattended, scheduled load or incremental load routine? Sure you can.

Categories: BI & Warehousing

Remove Deploy Target from JDeveloper

Amis Blog - Mon, 2017-07-03 08:06

When you use JDeveloper and deploy a project from JDeveloper to a server, JDeveloper remembers this, so you can easily deploy to it again. You can do this by right-clicking on the project and choose for Deploy and then the numbered deploy target of your choice (or from menubar -> Build -> Deploy -> deploy target). But how to remove such a deploy target if you don’t want to use it any more?

Take for example the screenshot below where you notice two deploy targets, “1… to SOADev” and “2… to SOAAcc”.
JDeveloper Deploy Targets
Suppose I want to remove the “2… to SOAAcc”, from the list.
Unfortunately, removing the server connection “SOAAcc” in JDeveloper didn’t remove it from the list of deploy targets in the projects, and I cannot find any other way of removing it in JDeveloper itself.

So I scanned for files with “SOAAcc” in them, and it turns out they are configured in user-cached project files. On my Windows laptop these files are located in the folder C:\Users\[username]\AppData\Roaming\JDeveloper\system12.2.1.0.42.151011.0031\o.ide\projects.
You will find a cached project file for each project in JDeveloper, at least the ones that have a deploy target.
They have the same name as the project, followed by a hash, and with the extension .jpr; in my case it is named “OtmTripExecution602704b7.jpr”. This file turns out to be an XML file, and in it there are two “hash” elements for “SOAAcc”, see the screenshot below. Removing them both removes the target in JDeveloper (be sure JDeveloper is closed when you do this).
Remove Deploy Target XML

The post Remove Deploy Target from JDeveloper appeared first on AMIS Oracle and Java Blog.

Updated PHP 7.2 PDO_OCI install 'configure' syntax

Christopher Jones - Sun, 2017-07-02 21:10

Frank Yang at Oracle has updated the PHP 7.2 'configure' option for PDO_OCI and added some of the heuristics previously only in the PHP OCI8 extension configure option. This has allowed the two extension option syntaxes to be aligned.

PDO_OCI is PHP's PDO driver for Oracle Database. PHP 7.2 is in Alpha status. In common with most other database PDO drivers, the PDO_OCI driver is part of the PHP bundle and is not on PECL.

The new PHP 7.2 PDO_OCI 'configure' syntax is like:

--with-pdo-oci[=DIR] PDO: Oracle OCI support. DIR defaults to $ORACLE_HOME. Use --with-pdo-oci=instantclient,/path/to/instant/client/lib for an Oracle Instant Client installation.

So now, for example, you could use:

./configure --with-pdo-oci=instantclient,/usr/lib/oracle/12.2/client64/lib \
            --with-oci8=instantclient,/usr/lib/oracle/12.2/client64/lib

Prior to PHP 7.2, configuring PDO_OCI with the Oracle Instant Client libraries required something like:

./configure --with-pdo-oci=instantclient,/usr,12.1

The version number on the end of the option was only partially validated. In many cases it was possible to use an 'incorrect' number and still get the desired libraries to be used. (Also each new Oracle Database version required PDO_OCI code changes to update a white list).

When building with PHP 7.1 (and earlier) and using Instant Client, this old syntax is still needed.

If you know PHP OCI8 'configure' options well, you may wonder why the install simplification heuristic on Linux to automatically use the highest installed version of Instant Client RPMs wasn't ported to PDO_OCI's 'configure'. Well, I still dream that bigger improvements to PDO_OCI will occur. The current incarnation of this dream is that PDO_OCI could one day be rewritten to use ODPI-C, like cx_Oracle 6 and node-oracledb 2. If this dream ever became reality, the configure option would become simply '--with-pdo-oci' since ODPI-C doesn't need Oracle headers or libraries at build time. ODPI-C only needs Oracle client libraries in LD_LIBRARY_PATH (or PATH on Windows) at run time. But I've had the dream of various PDO_OCI improvements for at least 10 years....

For the moment Oracle still recommends using PHP OCI8 in preference to PDO_OCI because OCI8 has superior features and can also take advantage of various Oracle client library connection and caching functionality.

Understanting dba_hist views

Tom Kyte - Sun, 2017-07-02 15:06
I found the query below on the network but I do not understand why it is subtracting? select to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS') "End snapshot time", sum(after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS) "number of IOs...
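
The subtraction is there because the DBA_HIST file statistics are cumulative counters per snapshot, so the I/O performed during an interval is the current snapshot's value minus the previous snapshot's value. A simplified sketch of that pattern, using DBA_HIST_FILESTATXS and DBA_HIST_SNAPSHOT (it joins each snapshot to the one before it and ignores instance restarts, which reset the counters):

select to_char(sn.end_interval_time, 'YYYY-MM-DD HH24:MI:SS') as end_snapshot_time,
       sum( (cur.phyrds  - prev.phyrds)
          + (cur.phywrts - prev.phywrts) )                    as ios_in_interval
from   dba_hist_filestatxs cur
join   dba_hist_filestatxs prev
       on  prev.dbid            = cur.dbid
       and prev.instance_number = cur.instance_number
       and prev.file#           = cur.file#
       and prev.snap_id         = cur.snap_id - 1
join   dba_hist_snapshot sn
       on  sn.dbid            = cur.dbid
       and sn.instance_number = cur.instance_number
       and sn.snap_id         = cur.snap_id
group by sn.end_interval_time
order by sn.end_interval_time;
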
Categories: DBA Blogs

Update a large amount of rows in the table

Tom Kyte - Sun, 2017-07-02 15:06
Hi, I have 10 million records in my table and I need to update 5 million of them. I checked Tom's solutions but I didn't find complete code. The approach I have already seen is: create table tblname as select <updated values> from tble, and after that rename the old...
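
For reference, the create-table-as-select-and-rename approach the poster alludes to (instead of a 5-million-row UPDATE) usually looks something like this sketch; table, column and predicate names are made up:

create table big_table_new nologging as
select pk_id,
       case when region = 'EMEA' then 'EUROPE' else region end as region,  -- the "updated" column
       other_col
from   big_table;

-- recreate indexes, constraints and grants on big_table_new, then swap:
alter table big_table     rename to big_table_old;
alter table big_table_new rename to big_table;
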
Categories: DBA Blogs

12c MultiTenant Posts -- 5 : Flashback a PDB

Hemant K Chitale - Sun, 2017-07-02 10:31
12.2 allows FLASHBACK DATABASE for a Pluggable Database.

Note that ALTER DATABASE FLASHBACK ON is not enabled by default in a newly created database -- you must issue this command at the CDB level to enable Flashback for all the Pluggable Databases.

SQL> connect hemant/hemant@NEWPDB
Connected.
SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
OBJ_LIST
HKC_STORE_FILE
T

SQL> select count(*) from obj_list;

COUNT(*)
----------
145282

SQL> select count(*) from hkc_store_file;

COUNT(*)
----------
1

SQL> truncate table obj_list;

Table truncated.

SQL> drop table hkc_store_file;

Table dropped.

SQL>


Now, let me Flashback the PDB.

SQL> connect / as sysdba
Connected.
SQL> alter pluggable database newpdb close;

Pluggable database altered.

SQL> flashback pluggable database newpdb to timestamp sysdate-3/1440;

Flashback complete.

SQL> alter pluggable database newpdb open;
alter pluggable database newpdb open
*
ERROR at line 1:
ORA-01113: file 19 needs media recovery
ORA-01110: data file 19:
'/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-MYDATA_FNO-19_0cs7a8e4'


SQL> alter pluggable database newpdb open resetlogs;

Pluggable database altered.

SQL>


Let me test the data.

SQL> connect hemant/hemant@NEWPDB
Connected.
SQL> select count(*) from obj_list;

COUNT(*)
----------
145282

SQL> select count(*) from hkc_store_file;

COUNT(*)
----------
1

SQL>


Yes, the FLASHBACK DATABASE is successful.

What are the pre-requisites? (The corresponding commands are sketched after the list below.)

1.  ALTER DATABASE FLASHBACK ON  at the CDB
2.  ARCHIVELOG at the CDB
3.  LOCAL UNDO enabled -- highly recommended else a subsequent Point In Time Recovery of the CDB may prevent OPENing the PDB
4.  OPEN RESETLOGS for the PDB
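
For reference, the first three can be set at the CDB root with commands along these lines (a sketch only; switching to ARCHIVELOG needs the database mounted, and enabling LOCAL UNDO in 12.2 needs the CDB started in UPGRADE mode):

SQL> connect / as sysdba
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> alter database flashback on;

SQL> shutdown immediate
SQL> startup upgrade
SQL> alter database local undo on;
SQL> shutdown immediate
SQL> startup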

.
.
.

Categories: DBA Blogs

The APEX Community & ODTUG Kscope

Joel Kallman - Sun, 2017-07-02 10:28
Another successful Oracle Development Tools User Group (ODTUG) Kscope conference is in the books.  And like every year, the global APEX community convenes at Kscope. It is really an amazing collection of experts and first-time attendees from around the globe - from the USA, Germany, England, Australia, Croatia, Brazil, Estonia, India, Austria, Belgium, Netherlands, Canada, Poland, Finland, New Zealand and more!

Make no mistake - I appreciate and fully support many other conferences around the globe where APEX has a presence, including:

  • OUG Ireland
  • OUG Scotland
  • APEX World
  • APEX Connect
  • Great Lakes Oracle Conference
  • Oracle OpenWorld
  • Slovenian Oracle User Group
  • Croatian Oracle User Group
  • Oracle Developer Tour Latin America
  • DOAG Konferenz + Ausstellung
  • UKOUG Technology Conference and Exhibition
  • High Five Polish Oracle User Group

But if you're in the APEX Community, or you want to join the passionate and growing global APEX community, please plan on attending an ODTUG Kscope conference.  Just once.  You'll thank me later.

Registration for Kscope18 is already open!


Green means GO! APEX sessions at Kscope.

APEX Sunday Symposium

APEX Open Mic Night

#LetsWreckThisTogether APEX Talks

(bash): The most useless commands (7)

Dietrich Schroff - Sat, 2017-07-01 03:14

After talking about
  1. rev
  2. sl
  3. cowsay
  4. cmatrix
here is another pretty useless command: cal

Today, with every smartphone having its own calendar app or every e-mail program with an integrated time planning function, this command is no longer needed. The manpage says:
NAME
     cal, ncal — displays a calendar and the date of Easter

And here we go:
$ cal 2000
                            2000
       Januar               Februar                 März        
So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa
                   1         1  2  3  4  5            1  2  3  4
 2  3  4  5  6  7  8   6  7  8  9 10 11 12   5  6  7  8  9 10 11
 9 10 11 12 13 14 15  13 14 15 16 17 18 19  12 13 14 15 16 17 18
16 17 18 19 20 21 22  20 21 22 23 24 25 26  19 20 21 22 23 24 25
23 24 25 26 27 28 29  27 28 29              26 27 28 29 30 31  
30 31                                                          
       April                  Mai                   Juni        
So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa
                   1      1  2  3  4  5  6               1  2  3
 2  3  4  5  6  7  8   7  8  9 10 11 12 13   4  5  6  7  8  9 10
 9 10 11 12 13 14 15  14 15 16 17 18 19 20  11 12 13 14 15 16 17
16 17 18 19 20 21 22  21 22 23 24 25 26 27  18 19 20 21 22 23 24
23 24 25 26 27 28 29  28 29 30 31           25 26 27 28 29 30  
30                                                              
        Juli                 August              September      
So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa
                   1         1  2  3  4  5                  1  2
 2  3  4  5  6  7  8   6  7  8  9 10 11 12   3  4  5  6  7  8  9
 9 10 11 12 13 14 15  13 14 15 16 17 18 19  10 11 12 13 14 15 16
16 17 18 19 20 21 22  20 21 22 23 24 25 26  17 18 19 20 21 22 23
23 24 25 26 27 28 29  27 28 29 30 31        24 25 26 27 28 29 30
30 31                                                          
      Oktober               November              Dezember      
So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa
 1  2  3  4  5  6  7            1  2  3  4                  1  2
 8  9 10 11 12 13 14   5  6  7  8  9 10 11   3  4  5  6  7  8  9
15 16 17 18 19 20 21  12 13 14 15 16 17 18  10 11 12 13 14 15 16
22 23 24 25 26 27 28  19 20 21 22 23 24 25  17 18 19 20 21 22 23
29 30 31              26 27 28 29 30        24 25 26 27 28 29 30
                                            31                    
 Or "the date of Easter":
$ ncal -e 2018
 1 April 2018
