Feed aggregator

10 reasons NOT to implement Blockchain now

Amis Blog - Tue, 2017-12-12 06:35

A secure distributed ledger with smart contract capabilities, not requiring a bank as an intermediary! Also a single source of truth with complete traceability. Definitely something we want! Blockchain technology promises to make this possible. Blockchain became famous through cryptocurrencies like Bitcoin and Ethereum. The technology could also be considered as a replacement for B2B functionality. With new technologies it is not a bad idea to look at the pros and cons before starting an implementation. Blockchain is the new kid on the block, and there is not much experience yet on how well it will play with others and how it will mature. In this blog I summarize some of my concerns regarding blockchain, which I hope will be solved in due time.

Regarding new/emerging technologies in the integration space, I’m quite open to investigating the potential value they can offer. I’m a great proponent of, for example, Kafka, the highly scalable streaming platform, and Docker to host microservices. However, I’ve been to several conferences and did some research online regarding blockchain, and I’m sceptical. I definitely don’t claim to be an expert on this subject, so please correct me if I’m wrong! Also, this is my personal opinion; it might deviate from my employer's and customers' views.

Most of the issues discussed here are valid for public blockchains. Private blockchains are of course more flexible since they can be managed by companies themselves. You can for example more easily migrate private blockchains to a new blockchain technology or fix issues with broken smart contracts. These do require management tooling, scripts and enough developers / operations people around your private blockchain though. I don’t think it is a deploy and go solution just yet.

1 Immutable is really immutable!

A pure public blockchain (not taking into account sidechains and off-chain code) is an immutable chain. Every block includes a hash of the previous block, so you cannot alter a block which is already on the chain. This makes sure things you put on the chain cannot suddenly change or disappear: there is traceability. Thus you cannot accidentally create money on a distributed ledger (unless you create smart contracts which provide you with that functionality). Security and immutability are great things, but they require you to work in a certain way we are not that used to yet. For example, you cannot cancel a confirmed transaction; you have to do a new transaction counteracting the effects of the previous one you want to cancel. If you have an unconfirmed transaction, you can ‘cancel’ it by creating a new transaction with the same inputs and a higher transaction fee (at least on a public blockchain). See for example here. Also, if you put a smart contract on a public chain and it has a code flaw someone can abuse, you’re basically screwed. If the issue is big enough, public blockchains can fork (if ‘the community’ agrees); see for example the DAO hack on Ethereum. In an enterprise environment with a private blockchain, you can fork the chain and replay the transactions after the issue you want corrected on the chain. This, however, needs to be performed for every serious enough issue and can be a time-consuming operation. In this case it helps (in your private blockchain) if you have a ‘shadow administration’ of transactions. You do have to take into account, however, that transactions can have different results based on what has changed since the fork. Being careful here is required.
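To make the chaining mechanism concrete, here is a minimal, hypothetical Python sketch (not any particular blockchain implementation) showing why tampering with an earlier block invalidates everything after it:

import hashlib
import json

def block_hash(block):
    # hash the block's full contents, including the previous block's hash
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev_hash = block_hash(chain[-1]) if chain else '0' * 64
    chain.append({'data': data, 'prev_hash': prev_hash})

def is_valid(chain):
    # every block must reference the hash of its predecessor
    return all(chain[i]['prev_hash'] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, 'genesis')
add_block(chain, 'pay 10 to Alice')
add_block(chain, 'pay 5 to Bob')
print(is_valid(chain))                   # True
chain[1]['data'] = 'pay 100 to Alice'    # tamper with history
print(is_valid(chain))                   # False: the chain is broken from that block onwards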

2 Smart Contracts

Smart contracts! It is really cool that you can also put a contract on the chain. Execution of the contract can be verified by nodes on the chain which have permission, and the contract itself is immutable.

However, there are some challenges when implementing smart contracts. A lot becomes possible, and this freedom sometimes creates unwanted side effects.

CryptoKitties

You can look up CryptoKitties, a game implemented using smart contracts on Ethereum. They can clog a public blockchain and cause transactions to take a really long time. This is not the first time blockchain congestion has occurred (see for example here). This is a clear sign there are scalability issues, especially with public blockchains. When using private blockchains, these scalability issues are also likely to occur eventually if the number of transactions increases (of course you can prevent CryptoKitties on a private blockchain). The Bitcoin / VISA comparison is an often quoted one, although there is much discussion on the validity of the comparison.

Immutable software. HelloWorld forever!

Smart contracts are implemented in code, and code contains bugs; depending on the implementation, those bugs sometimes cannot be fixed, since the code on the chain is immutable. Especially since blockchain is a new technology, many people will put buggy code on public blockchains, and that code will remain there forever. If you create DAOs (decentralized autonomous organizations) on a blockchain, this becomes even more challenging since the codebase is larger. See for example the Ethereum DAO hack.

Because the code is immutable, it will remain on the chain forever. Every hello world tryout, every CryptoKitten from everyone will remain there. Downloading the chain and becoming a node will thus become more difficult as the amount of code on the chain increases, which it undoubtedly will.

Business people creating smart contracts?

A smart contract might give the impression that a business person or lawyer should be able to design/create them. If they can create deterministic, error-free contracts which will be on the blockchain forever, that is of course possible. It is a question, though, how realistic that is. It resembles the idea that business people could create business rules in business rule engines (‘citizen developers’). In my experience, technical people need to do that in a controlled, tested manner.

3 There is no intermediary and no guarantees

There is no bank between you and the (public) blockchain. This can be a good thing, since a bank eats money. However, if for example the blockchain loses popularity, steeply drops in value or is hacked (compare this with a bank going bankrupt, e.g. Icesave), then you won’t have any guarantees like the deposit guarantee schemes in the EU. Your money might be gone.

4 Updating the code of a blockchain

Updating the core code of a running blockchain is, due to its distributed nature, quite a challenge. This often leads to forks. See for example Bitcoin forks like Bitcoin Cash and Bitcoin Gold, and an Ethereum fork like Byzantium. The issue with forks is that they make the entire cryptocurrency landscape crowded. It is like Europe in the past, when every country had its own currency: you have to exchange coins if you want to spend in a certain country (using the intermediaries everyone wants to avoid) or keep a stack of each of them. Forks, especially hard forks, come with security challenges such as replay attacks (transactions which can be valid on different chains). Some reasons you might want to update the code are that transactions are slow, security becomes an issue in the future (quantum computing) or new features are required (e.g. related to smart contracts).

5 Blockchain and privacy legislation (GDPR)

Security

Security is one of the strong points of blockchain technology and helps with the security by design and by default GDPR requirements. There are some other things to think about though.

The right to be forgotten

Things put on a blockchain are permanent. You cannot delete them afterwards, although you might be able to make them inaccessible in certain cases. This conflicts with the GDPR right to be forgotten.

Data localization requirements

Every node has the entire blockchain and thus all the data. This might cause issues with legislation, for example requirements to keep data within the same country. This becomes more of a challenge when running blockchain in a cloud environment. In Europe, with many relatively small countries, this will be more of an issue compared to, for example, the US, Russia or China.

Blockchain in the cloud

A lot depends on the types of services the blockchain cloud provider offers and how much they charge for them. It could be similar to using a bank, requiring you to pay per transaction; in that case, why not stick to a bank? Can you enforce that the nodes are located in your country? If you need to fix a broken smart contract, will there be a service request, and will the cloud provider fork and replay transactions for you? Will you get access to the blockchain itself? Will they provide a transaction manager? Will they guarantee a maximum number of transactions per second in their SLA? A lot of questions, for which there are probably answers (which differ per provider), and based on those answers you can make a cost calculation to see whether the cloud blockchain will be worthwhile. In the cloud, the challenges of being GDPR compliant are even greater (especially for European governments and banks).

6 Lost your private key?

If you have lost your private key or lost access to your wallet (a more business-friendly name for a keystore) containing your private key, you might have lost your assets on the blockchain. Luckily a blockchain is secure and there is no easy way to fix this. If your wallet is managed by a 3rd party, they might be able to help you with recovering it. Those 3rd parties, however, are hacked quite often (a lot of value can be obtained from such a hack). See for example here, here and here.

7 A blockchain transaction manager is required

A transaction is put on the blockchain. The transaction is usually verified by several nodes before it is distributed to all nodes and becomes part of the chain. Verification can fail or might take a while; this can be hours on some public blockchains. It could also be that the transaction has been overtaken by another transaction with higher priority. In the software which is integrated with a blockchain solution, you have to keep track of the state of transactions, since you want to know the up-to-date value of your assets. This causes an integration challenge, and you might have to introduce a product which has a blockchain transaction manager feature.
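As an illustration only: assuming an Ethereum-style chain and the web3.py client, a very small piece of such a transaction manager could poll for confirmations roughly like this (the endpoint URL and confirmation threshold are assumptions for the example):

import time
from web3 import Web3
from web3.exceptions import TransactionNotFound

# hypothetical node endpoint; replace with your own provider URL
w3 = Web3(Web3.HTTPProvider('http://localhost:8545'))

def wait_for_confirmations(tx_hash, confirmations=12, poll_seconds=15):
    # poll until the transaction is buried deep enough in the chain to be trusted
    while True:
        try:
            receipt = w3.eth.get_transaction_receipt(tx_hash)
        except TransactionNotFound:
            receipt = None
        if receipt is not None:
            depth = w3.eth.block_number - receipt.blockNumber
            if depth >= confirmations:
                return receipt  # now update your own administration
        time.sleep(poll_seconds)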

8 Resource inefficient; not good for the environment

Blockchain requires large amounts of resources when compared to classic integration. Every node has the complete chain, so every node can verify transactions. This is a good thing, since if a single node is hacked, other nodes will overrule the transactions which this node offers to the chain if they are invalid in whatever way. However, this means every transaction is distributed to all nodes (network traffic) and every verification is performed on every node (CPU). Also, when the chain becomes larger, every node still holds a complete copy, so disk space is not used efficiently. See for example some research on blockchain electricity usage here. Another example is that a single Bitcoin transaction (4 can be processed per second) requires the same amount of electricity as 5000 VISA transactions (while VISA can do 4000 transactions per second, see here). Of course there is discussion on the validity of such a comparison, and in the future this will most likely change; it is also an indication that blockchains are still in their early stages.

9 Maturity

Blockchain is relatively new and new implementations appear almost daily. There is little standardisation. The below picture was taken from a slide at the UKOUG Apps17 conference in Birmingham this year (the presentation was given by David Haimes).

Even with this many (partially open source) products, it seems every implementation requires a new product. For example, the Estonian government has implemented its own blockchain flavor: KSI Blockchain. It is likely that eventually there will be a most commonly used product, which will hopefully be the one that works best (unlike what happened in the videotape format wars).

Which product?

If you choose a product to implement now, you will most likely not choose the product which will be most popular in a couple of years' time. Improvements to the technology/products will quite quickly catch up with you. This will probably mean you would have to start migration projects.

Standards?

For web service integrations, for example, there are many internationally acknowledged standards such as the WS-* standards and SOAP. For REST services there are JOSE, JWT and of course JSON. What is there for blockchain? Every product uses its own protocol. It is like the times when Microsoft conjured up its own HTML/JavaScript standards, causing cross-browser compatibility issues. Only this time there are hundreds of Microsofts.

10 Quantum computing

Most of the blockchain implementations are based on ECDSA signatures. Elliptic curve cryptography is vulnerable to a modified Shor’s algorithm for solving the discrete logarithm problem on elliptic curves. This potentially makes it possible to obtain a user’s private key from their public key when performing a transaction (see here and here). Of course this will be fixed, but how? By forking the public blockchains? By introducing new blockchains? As indicated before, updating the technology of a blockchain can be challenging.

How to deal with these challenges?

You can jump on the wagon and hope the ride will not carry you off a cliff. I would be a bit careful when implementing blockchain. In an enterprise, I would not expect to quickly get something to production which will actually be worthwhile in use without a lot of expertise to work on all the challenges.

Companies will gain experience with this technology, and architectures which mitigate these challenges will undoubtedly emerge. It could also turn out that the base assumptions blockchain technology rests on are not practical in enterprise environments, and another technology arises to fill the gap.

Alternatives? Not really

To be honest, a solid alternative which covers all the use cases of blockchain is not easily found. This might also help explain the popularity of blockchain. Although there are many technical challenges, in the absence of a solid alternative, where else should you go to implement those use cases?

SWIFT?

Exchanging value internationally has long been done using the SWIFT network (usually with a B2B application providing a bridge). This, however, often requires manual interventions (at least in my experience), and there are security considerations: SWIFT has been hacked, for example.

Kafka?

The idea of having a consortium which guards a shared truth has been around for quite a while in the B2B world. The technology such a consortium uses can just as well be, for example, a collection of Kafka topics. It would require a per-use-case study to see whether all the blockchain features can be implemented. It will perform far better, the order of messages (like in a blockchain) can be guaranteed, a message put on a topic is immutable, and you can use compacted topics to get the latest value of something. Kafka has been designed to be easily scalable. It might be worthwhile to explore whether you can create a blockchain-like setup which does not have its drawbacks, as in the sketch below.
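As a rough sketch of that idea, assuming a locally reachable Kafka broker and the kafka-python client (the topic and key names are invented for the example), a compacted topic keyed by asset ID keeps the latest value per asset:

from kafka import KafkaProducer
from kafka.admin import KafkaAdminClient, NewTopic

# hypothetical broker address and topic name
admin = KafkaAdminClient(bootstrap_servers='localhost:9092')
admin.create_topics([NewTopic(name='asset-ledger',
                              num_partitions=1,
                              replication_factor=1,
                              topic_configs={'cleanup.policy': 'compact'})])

producer = KafkaProducer(bootstrap_servers='localhost:9092')
# messages with the same key are compacted down to their latest value
producer.send('asset-ledger', key=b'account-42', value=b'{"balance": 100}')
producer.send('asset-ledger', key=b'account-42', value=b'{"balance": 90}')
producer.flush()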

Off-chain transactions and sidechains

Some blockchain issues can be mitigated by using so-called off-chain transactions and code. See for example here. Sidechains are extensions to existing blockchains, enhancing their privacy and functionality by adding features like smart contracts and confidential transactions.

Finally

It might not seem like it from the above, but I’m excited about this new technology. Currently, however, in my opinion it is still immature. It lacks standardization and performance, and the immutable nature of a blockchain might be difficult to deal with in an actual enterprise. With this blog I tried to create some awareness of things you might think about when considering an implementation. Currently, implementations (still?) require much consultancy on the development and operations side to make them work. This is of course a good thing in my line of work!

The post 10 reasons NOT to implement Blockchain now appeared first on AMIS Oracle and Java Blog.

ORA-00240: control file enqueue held for more than 120 seconds 12c

Pakistan's First Oracle Blog - Mon, 2017-12-11 22:30
Occasionally, in an Oracle 12c database, you may get the "ORA-00240: control file enqueue held for more than 120 seconds" error in the alert log file.


Well, as this error contains the words "control file" it looks scary, but if you are getting it infrequently and the instance stays up and running, then there is no need to worry and it can be ignored as a fleeting glitch.

But if it starts happening too often, or in the worst case hangs or crashes the instance, then it is more than likely a bug, and you need to apply either the latest PSU or a one-off patch available from Oracle Support after raising an SR.

There have been occurrences where this error occurred due to a high number of sessions conflicting with the ulimit of the operating system, resulting in ASM and the RDBMS hanging. Sometimes it could be due to shared pool latch contention, as per a few MOS documents.

So if it's rare, then ignore it and move on with life, as there are plenty of other things to worry about. If it's frequent and a show-stopper, then by all means raise a SEV-1 SR with Oracle Support as soon as possible.
Categories: DBA Blogs

Using an "On Field Value Changes" Event in Oracle Visual Builder Cloud Service

Shay Shmeltzer - Mon, 2017-12-11 16:51

This entry is based on previous entries from John and Shray that deal with the same topic and provide the same type of solution. John's entry was created before VBCS provided the UI id for components, and Shray's entry deals with a more complex scenario that also involves fetching new data. So I figured I'd write my version here - mostly for my own future reference if I need to do this again.

The goal is to show how you can modify the UI shown in a VBCS page in response to data changes in fields, for example how to hide or show a field based on the value of another field.

To do this, you need to hook into the HTML lifecycle of your VBCS page and subscribe to events in the UI. Then you code the changes you want to happen. Your gateway into manipulating/extending the HTML lifecycle in VBCS is the custom component available in the VBCS component palette. It provides a way to add your own HTML+JavaScript into an existing page.

The video below shows you the process (along with a couple of small mistakes along the route):

The basic steps to follow:

Find out the ID of the business object field whose value changes you want to listen to. You'll also need to know the ID of the UI component you want to manipulate - this is shown as the last piece of info in the property inspector when you click on a component.

Once you have those you'll add a custom component into your page, and look up the observable that relates to the business object used in the page. This can be picked up from the "Generated Page Model (read-only)" section of the custom component and it will look something like : EmpEntityDetailArchetype

Next you are going to add a listener to your custom component model. Add it after the lines 

//the page view model
this.pageViewModel = params.root;

your code would look similar to this:

this._listener = this.pageViewModel.Observables.EmpEntityDetailArchetype.item.ref2Job.currentIDSingle.subscribe(
  function (value) {
    if (value === "2") {
      $("#pair-currency-32717").show();
    } else {
      $("#pair-currency-32717").hide();
    }
  });

CustomComponentViewModel.prototype.dispose = function () {
  this._listener.dispose();
};

Where you will replace the following:

  • EmpEntityDetailArchetype  should be replaced with the observable for your page model.
  • ref2Job  should be replaced with the id of the column in the business object whose value you are monitoring.
  • pair-currency-32717 should be replaced with the id of the UI component you want to modify. (in our case show/hide the component).

You can of course do more than just show/hide a field with this approach.

Categories: Development

TensorFlow Linear Regression Model Access with Custom REST API using Flask

Andrejus Baranovski - Mon, 2017-12-11 13:28
In my previous post - TensorFlow - Getting Started with Docker Container and Jupyter Notebook - I described the basics of how to install and run TensorFlow using Docker. Today I will describe how to give access to the machine learning model from outside of TensorFlow with REST. This is particularly useful while building JS UIs on top of TensorFlow (for example with Oracle JET).

TensorFlow supports multiple languages, but the most common one is Python. I have implemented a linear regression model using Python and now would like to give access to this model from the outside. For this reason I'm going to use Flask, a micro-framework for Python, to allow simple REST annotations directly in Python code.

To install Flask, enter into TensorFlow container:

docker exec -it RedSamuraiTensorFlowUI bash

Run Flask install:

pip install flask

Run Flask CORS install:

pip install -U flask-cors

I'm going to call the REST service from JS; this means the service should support CORS, otherwise the request will be blocked. No worries, we can import Flask CORS support.

REST is enabled for the TensorFlow model with Flask in these two lines of code:
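The original post shows the code as screenshots, which are not reproduced here; a minimal sketch of what those two lines typically look like, given the flask and flask-cors installs above:

from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)  # create the REST application
CORS(app)              # allow cross-origin calls from a JS UI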


As soon as Flask is imported and enabled, we can annotate a method with REST operations and an endpoint URL:
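Continuing the sketch above (the endpoint URL and method name are assumptions for illustration):

@app.route('/api/linearregression', methods=['GET', 'POST'])
def linear_regression():
    # training/inference logic for the model goes here
    return 'linear regression endpoint'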


There is an option to check what kind of REST operation is executed and to read input parameters from the POST request. This is useful to control model learning steps, for example:
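Inside such an annotated method, the check could look roughly like this (the parameter names are assumptions):

if request.method == 'POST':
    params = request.get_json()
    # hypothetical training parameters controlled by the caller
    learning_rate = params.get('learning_rate', 0.01)
    training_epochs = params.get('training_epochs', 1000)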


If we want to collect all x/y value pairs and return them in the REST response, we can do that by collecting the values into an array:
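Sketched, with train_X and train_Y standing in for the model's training data arrays (names assumed):

xy_pairs = []
for x, y in zip(train_X, train_Y):
    # collect each training pair as a JSON-friendly dict
    xy_pairs.append({'x': float(x), 'y': float(y)})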


And construct a JSON response directly out of the array structure using Flask jsonify:
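Continuing the sketch, the annotated method could end with:

return jsonify(result=xy_pairs)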


After we run the TensorFlow model in Jupyter, it will print the URL endpoint for the REST service. For the URL to be accessible outside the TensorFlow Docker container, make sure to run the model bound to 0.0.0.0:
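In the sketch, that corresponds to starting the Flask development server bound to all interfaces (the port is an assumption; Flask defaults to 5000):

app.run(host='0.0.0.0', port=5000)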


Here is an example of a TensorFlow model REST call from Postman, where a POST operation is executed with a payload and the response is shown:
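The Postman screenshot is not reproduced here; an equivalent call could also be made from Python with the requests library, against the hypothetical endpoint sketched above:

import requests

resp = requests.post('http://localhost:5000/api/linearregression',
                     json={'learning_rate': 0.01, 'training_epochs': 1000})
print(resp.status_code, resp.json())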


All REST calls are logged in TensorFlow:


Download the TensorFlow model enabled with REST from my GitHub.

Top 12 Rank Tracking Software and services

Nilesh Jethwa - Mon, 2017-12-11 13:23

The role of Search Engine Optimization and keyword tracking tools is important in this technological age. This is especially true for people involved in business. One sure way to track the performance of a business is to use software specifically designed to track rank and popularity.

Rank tracking software gauges the performance of a business. Using a rank tracker tool allows one to evaluate and track ranks in search engines, as well as gauge visibility to the prospective market. It can also observe the progress of competitors in the market. This knowledge can act as an edge and a chance for improvement.

Monitoring a business using a keyword rank checker can offer numerous benefits to a business enterprise, but before that can happen, you must first find the rank tracker tool that suits you.

Here is a list of the top rank tracking software and services:

  1. ADVANCE WEB RANKING (AWR)

Founded in 2002, this company offers an extensive variety of keyword tracking tools. It prides itself on providing worry-free reporting that allows a user to focus on website optimization rather than tracking. It also offers a cloud service and desktop software.

Features and benefits of this rank tracker:

  • Allows white label reporting
  • It offers unlimited monitoring of campaigns and website competition
  • Allows easy assignment of read/write permissions
  • This keyword rank checker has an international customer base that supports special characters
  • Simple project setup and easy sharing of data online
  • User-intuitive interface and can be accessed by mobile devices
  • Because of its custom Google locations, it can support international markets and get results from 50+ countries.

What makes it better?

Compared to its competitors, Advanced Web Ranking can give localized rankings with pinpoint accuracy. Geolocation proxies in the company's system are the reason for this better accuracy. It also offers a free 30-day trial.

But what’s the price?

The least priced option is $49 per month and the highest is $499. The price is a bit costly, but with its given features, it is understandable.

 

Read more at https://www.infocaptor.com/dashboard/top-12-rank-tracking-software-and-services-for-your-business-needs

Secure Configuration Guidelines for Oracle E-Business Suite 12.2 and 12.1

Steven Chan - Mon, 2017-12-11 11:32

We've been providing Oracle E-Business Suite secure configuration guidelines or best practices in our published MOS Notes and guides for some time now. Our secure configuration deployment guidelines include the following recommendations:

Related Articles

References

Categories: APPS Blogs

DBMS_COMPRESSION can be run in parallel, at least from 11.2.0.4

Jeff Moss - Mon, 2017-12-11 10:12

I was trying to use DBMS_COMPRESSION on an 11gR2 (11.2.0.4 on RHEL 6.3) database the other day and was helped by this article from Neil Johnson. I was having some trouble with permissions until I read that article which nicely summarises everything you need to know – thanks Neil!

One thing I did notice is that Neil stated that you can’t parallelise the calls to the advisor, since it uses the same named objects each time and this would then cause conflicts (and problems). Neil illustrated the example calls that the advisor makes, based on him tracing the sessions…

create table "ACME".DBMS_TABCOMP_TEMP_UNCMP tablespace "SCRATCH" nologging
 as select /*+ DYNAMIC_SAMPLING(0) FULL("ACME"."ACCS") */ *
 from "ACME"."ACCS" sample block( 99) mytab
 
create table "ACME".DBMS_TABCOMP_TEMP_CMP organization heap 
 tablespace "SCRATCH" compress for all operations nologging
 as select /*+ DYNAMIC_SAMPLING(0) */ *
 from "ACME".DBMS_TABCOMP_TEMP_UNCMP mytab

Because I kept having permissions issues I was repeatedly running the advisor and I ended up with a situation where one of the transient objects (above, or so I thought) had been left in place and when I tried the next rerun it complained that the object existed. I can’t reproduce this as I can’t remember all the steps that I took and I wasn’t recording my session at the time – it’s not really the point of this blog in any case, rather the knowledge it led to. Because the error was that the object existed, I figured I just needed to find the object and drop it and I’d be good to carry on – obviously I looked at the above code fragments and started to search for the two objects in question (DBMS_TABCOMP_TEMP_UNCMP and DBMS_TABCOMP_TEMP_CMP) but found nothing. I started looking for DBMS_TABCOMP% and again found nothing.

Somewhat confused, I then looked for the latest objects created and found that the objects were actually named something completely different, of the form CMPx$yyyyyyyy. I think this must have changed since Neil wrote his article (it is from 2013, after all).

I can’t work out what “x” is – at first I thought it was the RAC instance but that was just a coincidence that I saw a 3 and I was on instance 3 of a RAC cluster. In fact on a single instance database (test below) I saw numbers higher than 1 so it’s not the RAC instance number and I can’t work out what it is. “yyyyyyyy” is definitely the OBJECT_ID of the data object, confirmed by cross referencing the data dictionary.

Given this naming standard is therefore object specific, it suggests that you could execute these advisor calls in parallel.

Just to be clear, I’m not sure what version of 11g Neil was using but I am using 11.2.0.4:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

A little test using Neil’s code (and a bit more as I don’t have acme user on my database):

First create a script called dbms_comp.sql with the following content:

set serveroutput on
set feedback on
set verify off
 
declare
 blkcnt_cmp BINARY_integer;
 blkcnt_uncmp BINARY_integer;
 row_cmp BINARY_integer;
 row_uncmp BINARY_integer;
 cmp_ratio number;
 comptype_str varchar2(60);
begin
 dbms_compression.get_compression_ratio(
 scratchtbsname => upper('&3.')
 , ownname => upper('&1.')
 , tabname => upper('&2.')
 , partname => null
 , comptype => dbms_compression.comp_for_oltp
 , blkcnt_cmp => blkcnt_cmp
 , blkcnt_uncmp => blkcnt_uncmp
 , row_cmp => row_cmp
 , row_uncmp => row_uncmp
 , cmp_ratio => cmp_ratio
 , comptype_str => comptype_str
 , subset_numrows => &4.
 );
 DBMS_OUTPUT.PUT_LINE('Block count compressed = ' || blkcnt_cmp);
 DBMS_OUTPUT.PUT_LINE('Block count uncompressed = ' || blkcnt_uncmp);
 DBMS_OUTPUT.PUT_LINE('Row count per block compressed = ' || row_cmp);
 DBMS_OUTPUT.PUT_LINE('Row count per block uncompressed = ' || row_uncmp);
 --DBMS_OUTPUT.PUT_LINE('Compression type = ' ||comptype_str);
 DBMS_OUTPUT.PUT_LINE('Compression ratio = '||round(blkcnt_uncmp/blkcnt_cmp,1)||' to 1');
 DBMS_OUTPUT.PUT_LINE('Compression % benefit = '||round((blkcnt_uncmp-blkcnt_cmp)/blkcnt_uncmp*100,1));
 --DBMS_OUTPUT.PUT_LINE('Compression ratio org= '||cmp_ratio);
end;
/
set verify on

Then create another script called setup.sql with the following content – I’m using auditing (thanks Tim!) to see the statements rather than tracing in this instance:

conn sys as sysdba
drop user acme cascade;
drop user nj cascade;
drop tablespace acme including contents and datafiles;
drop tablespace scratch including contents and datafiles;
drop role nj_dba;
create user acme identified by acme;
grant create session,create table to acme;
create tablespace acme datafile '/u01/app/oracle/oradata/db11g/acme01.dbf' size 2G;
alter user acme quota unlimited on acme;
create tablespace scratch datafile '/u01/app/oracle/oradata/db11g/scratch01.dbf' size 2G;
create role nj_dba;
create user nj identified by nj;
REM Use auditing instead of tracing to identify the statements run:
audit all by nj by access;
audit create table by nj by access;
grant create session, create any table, drop any table, select any table to nj_dba;
grant execute on sys.dbms_monitor to nj_dba;
grant nj_dba to nj;
alter user acme quota unlimited on scratch;
alter user nj quota unlimited on scratch;
grant ANALYZE ANY to NJ_DBA;

Now login to sqlplus /nolog and run setup.sql which should show this:

SQL> @setup
Enter password:
Connected.

User dropped.

User dropped.

Tablespace dropped.

Tablespace dropped.

Role dropped.

User created.

Grant succeeded.

Tablespace created.

User altered.

Tablespace created.

Role created.

User created.

Audit succeeded.

Audit succeeded.

Grant succeeded.

Grant succeeded.

Grant succeeded.

User altered.

User altered.

Grant succeeded.

Now login to acme and create the subject table for the compression advisor:

conn acme/acme
create table test tablespace acme as select lpad(TO_CHAR(ROWNUM),2000,'x') char_col from dual connect by level < 300000;

Table created.

Now check the compression using the advisor (ignore the actual compression results as we’re not interested in those at this time):

conn nj/nj
@dbms_comp acme test scratch 200000
Block count compressed = 2048
Block count uncompressed = 2048
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Now check the audit trail to find the statements run:

conn sys as sysdba
column username format a8 
column obj_name format a30
column timestamp format a30
column action_name format a20
set linesize 200
set pagesize 1000
SELECT USERNAME, action_name,obj_name, to_char(extended_TIMESTAMP,'DD-MON-YYYY HH24:MI:SSxFF') timestamp FROM DBA_AUDIT_TRAIL WHERE USERNAME='NJ' and action_name='CREATE TABLE';

USERNAME ACTION_NAME OBJ_NAME TIMESTAMP
-------- -------------------- ------------------------------ --------------------
NJ CREATE TABLE CMP3$87401 10-DEC-2017 18:01:15.119890
NJ CREATE TABLE CMP4$87401 10-DEC-2017 18:01:15.177518

(Abridged to remove non relevant tests)

Now check the dictionary to see the OBJECT_ID:

select object_name from dba_objects where object_id=87401;

OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
TEST

1 row selected.

OK, how about the parallelism? Let’s create a second table called TEST2 in ACME:

conn acme/acme
create table test2 tablespace acme as select lpad(TO_CHAR(ROWNUM),2000,'x') char_col from dual connect by level < 300000;

Table created.

Now run two parallel sessions – I did it by firing off the calls manually in separate SQL*Plus sessions rather than being clever:

In session 1:

conn nj/nj
@dbms_comp acme test scratch 200000

In session 2:

conn nj/nj
@dbms_comp acme test2 scratch 200000

 

First one gives:

Block count compressed = 1920
Block count uncompressed = 1920
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Second one gives:

Block count compressed = 2432
Block count uncompressed = 2432
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Both ran at the same time and didn’t fail.

Now check the audit trail:

conn sys as sysdba
column username format a8 
column obj_name format a30
column timestamp format a30
column action_name format a20
set linesize 200
set pagesize 1000
SELECT USERNAME, action_name,obj_name, to_char(extended_TIMESTAMP,'DD-MON-YYYY HH24:MI:SSxFF') timestamp FROM DBA_AUDIT_TRAIL WHERE USERNAME='NJ' and action_name='CREATE TABLE';
USERNAME ACTION_NAME OBJ_NAME TIMESTAMP
-------- -------------------- ------------------------------ --------------------
NJ CREATE TABLE CMP3$87401 10-DEC-2017 18:01:15.119890
NJ CREATE TABLE CMP4$87401 10-DEC-2017 18:01:15.177518 
NJ CREATE TABLE CMP3$87408 10-DEC-2017 18:12:18.114321
NJ CREATE TABLE CMP3$87409 10-DEC-2017 18:12:18.114353
NJ CREATE TABLE CMP4$87408 10-DEC-2017 18:12:22.730715
NJ CREATE TABLE CMP4$87409 10-DEC-2017 18:12:22.735908
(Abridged to remove non relevant tests)

And from the dictionary:

select object_name from dba_objects where object_id IN(87408,87409);

OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
TEST
TEST2

2 rows selected.

So, it appears to allow parallel running without issue.

If anyone works out what the “x” part of the object names is (3 and 4 in the above example), please shout out in the comments…

Oracle Expands Security Portfolio with New Capabilities and Partner Program Designed to Help Organizations Detect and Protect Against Threats

Oracle Press Releases - Mon, 2017-12-11 07:00
Press Release
Oracle Expands Security Portfolio with New Capabilities and Partner Program Designed to Help Organizations Detect and Protect Against Threats New Identity Governance, A.I.-based Configuration Management, and Consumer Identity Management are features designed to help organizations protect their clouds and digital services

Redwood Shores, Calif.—Dec 11, 2017

Oracle today announced that it is expanding its security portfolio and unveiling a new partner program. The Oracle Identity SOC portfolio now includes new capabilities to help enterprises manage and certify user identities, applications, and confidential data more securely and through a richer, consumerized user experience. Additionally, the new partner program will help improve collaboration with security vendors and simplify customer adoption.

First unveiled at Oracle OpenWorld 2017, Oracle’s integrated suites—Oracle Identity Security Operations Center (SOC) portfolio of services and Oracle Management Cloud—are designed to help enterprises forecast, reduce, detect, and resolve security threats and assist in efforts to remediate application and infrastructure performance issues.  Leveraging artificial intelligence to analyze a unified data set consisting of the full breadth of security and operational telemetry, as well as provide automated remediation, Oracle’s integrated suite is designed to enable customers to quickly adapt their security and operational posture as their risk landscape changes. This application of machine learning can potentially help thwart attacks, reduce the detection window from months to minutes, and more quickly address security breaches and performance outages.

“Built on our advanced artificial intelligence and automation technologies, Oracle Identity SOC portfolio now offers cloud-native identity governance combined with adaptive security designed to protect our customer’s cloud and digital services,” said Rohit Gupta, group vice president, Cloud Security, Oracle. “In addition to strengthening internal security monitoring, Oracle’s comprehensive cross-stack threat detection capabilities provide broad threat detection range, less noise from false positives and a high level of accuracy in detecting and remediating threats against the enterprise’s most important IT asset–its data.”

The Oracle Identity SOC portfolio now includes new automated identity governance for hybrid clouds, expanded consumer identity management, machine learning-driven configuration management and enhanced artificial intelligence capabilities.

First Cloud-Native Identity Governance Offering

Oracle announced the first cloud-native identity governance service for hybrid cloud environments, which will be fully integrated and native to Oracle’s SaaS applications, Oracle Identity SOC portfolio (including Oracle Identity Cloud Service and Oracle CASB Cloud Service), as well as Oracle Management Cloud. This combination helps provide a better user experience for identity governance and automation of traditional processes and assessments through intelligent machine learning and cloud application risk feeds from Oracle CASB Cloud Service, all of which can be unified with security and operational telemetry from on-premises and hybrid environments. The new identity governance service features out-of-the-box connectors for integration with external systems and a consumer-grade interface for enterprise campaigns including application access requests, approvals and certifications.

Oracle CASB Cloud Service now also includes risk-based cloud access control to help enterprises mitigate risks to cloud applications. The new controls help manage configuration changes or application access based on rich context including device type, geo-location, and dynamic risk scores. Combined, these factors are designed to help prevent user access based on stolen credentials and mitigate the risk of cloud application administrator privileges misuse.

Industry-First Machine Learning-Driven Configuration Management

Additionally, Oracle introduced a game-changing feature in Oracle Configuration and Compliance Cloud Service, built on Oracle Management Cloud, which automatically discovers the configuration settings across a customer’s entire organization in real time and uses machine learning to find outlier configurations, which can then be automatically remediated.  In addition, Oracle Management Cloud now supports rulesets to enforce DISA’s Security Technical Implementation Guide (STIG) against the Oracle Database.  Oracle has also now extended its user and entity behavior analytics (UEBA) across the entire stack.  Specifically, Oracle Security Monitoring and Analytics (SMA) Cloud Service continuously baselines data access patterns at the SQL level and then detects anomalies by user, database or application.

Expanded Consumer Identity Management

Recognizing the need for enterprises to correlate consumer data with marketing information, Oracle expanded its consumer identity management capabilities in Oracle Identity Cloud Service with integrations with Oracle Marketing Cloud and Oracle Data Cloud. By leveraging built-in consent management, social profiles, preference management and activity attributes from Oracle Identity Cloud Service with these solutions, enterprises can build more targeted marketing campaigns with better consumer insights and analytics. In addition, easy integration with third-party or custom applications via Oracle Self-Service Integration Cloud Service will allow marketers to incorporate additional services.

New Partner Programs Help Deliver Complete Security Solutions

Continuing its commitment to supporting the development of a heterogeneous security ecosystem, Oracle today launched the Identity SOC Security Network in Oracle Cloud Marketplace. This program offers technology integrations with leading security vendors across critical areas including Enterprise Mobility Management, Next Generation Firewalls, Endpoint Security and Threat Intelligence, enabling enterprises to enrich the dynamic context, intelligence and automation capabilities of Oracle Identity SOC. In addition, Oracle continues to enable hundreds of partners worldwide to achieve Oracle PartnerNetwork Specializations in Security and Systems Management Cloud Services.

"We're incredibly excited to work with the Oracle Identity SOC portfolio," said Prakash Linga, CTO and co-founder of Vera. "Identity is the single most critical element of all modern applications and systems. Enhancing Oracle’s portfolio with Vera’s data-centric security and encryption solution makes Oracle’s services incredibly compelling for businesses.”

“We are excited to work with Oracle to introduce Compromised Credentials as a new category in Oracle CASB Cloud Service for refining organization and identity-centric risk assessment,” said Steve Tout, CEO of Seattle based VeriClouds. “With VeriClouds-powered credential monitoring and detection, more than 7 billion compromised credentials are surfaced to enable the automation of risk insight from the dark web into actionable intelligence for the Oracle Identity SOC.”

“Our digital workspace platform, VMware Workspace ONE powered by AirWatch unified endpoint management technology, helps IT secure apps based on conditional access policies and provides a seamless end user experience for any app from any device,” said Ashish Jain, vice president of product management, End-User Computing, VMware. “Workspace ONE and Oracle’s security solutions enables us to strengthen application security and provide data-driven insights to help mutual customers make better informed decisions related to device and app management.”

Contact Info
Jesse Caputo
Oracle
+1.650.506.5967
jesse.caputo@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


OBIEE 12c (12.2.1.3) post upgrade challenges - unable to save HTML based dashboard reports

Syed Jaffar - Mon, 2017-12-11 03:18
Very recently, an OBIEE v12.2.1.1 instance was upgraded to v12.2.1.3 on an Exalytics machine. Though it wasn't a major upgrade, we had to do it to fix some bugs encountered in v12.2.1.1. For sure, it wasn't an easy walk in the park during the upgrade and post upgrade.

I am here to discuss a particular issue we faced post upgrade. This should be interesting.

Post upgrade to v12.2.1.3, we heard from the application team that all reports were working perfectly fine, except that dashboard reports with HTML tags could not be saved. The below error was thrown when trying to save the existing HTML reports:

"You do not currently have sufficient privileges to save a report or dashboard page that contains HTML markup"

It appeared to be a very generic error, and we tried all the workarounds we found on various websites and in Oracle docs. Unfortunately, none of the solutions fixed the issue. One popular piece of advice is the following:

you need to go and check the HTML markup privileges in Administration.


1. Login with OBIEE Admin user,
2. Go to Administration link - Manage Privileges,
3. Search for "Save Content with HTML Markup" and see if it has necessary privileges. else assign the necessary roles and re-check the issue. 


After giving the additional privilege we could save the existing HTML reports. However, the issue still existed when we created a new dashboard report with an HTML tag. This time the below error message appeared:



While doing the research we also opened an SR with Oracle Support. Luckily (and I mean it), the engineer who was assigned this SR was good enough to catch the issue and provide the solution.

The engineer tried to simulate the issue in his 12.2.1.3 environment and, surprisingly, faced the same problem. He then contacted the OBIEE development team and raised the above concerns.

A few hours later, he provided us with the below workaround, which seems to be a generic solution to both the first and second problems we faced. I believe this is a common issue on 12.2.1.3.


Please follow the steps below:

1. Take a backup of the instanceconfig.xml file, which is located at:

<obiee_home>/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml 

2. Add the following entry under security. 

<Security> 

<CheckUrlFreshness>false</CheckUrlFreshness> 
<EnableSavingContentWithHTML>true</EnableSavingContentWithHTML> 

</Security> 

3. Restart the presentation services with the commands below:

cd /refresh/home/oracle/12c/Oracle_Home/Oracle_Config/bi/bitools/bin

./stop.sh -i obips1 

./start.sh -i obips1 

4. Now you will see the "Contains HTML Markup" option in Answers; check it and try to save the report.

The solution worked perfectly and all HTML dashboard reports could be saved.

If you are on OBIEE 12.2.1.3, I strongly recommend you test the HTML reports, and if you encounter an issue similar to ours, apply the workaround.



Updated Whitepapers for 4.3.0.5.0

Anthony Shorten - Sun, 2017-12-10 18:49

With the anticipated releases of the first products based upon Oracle Utilities Application Framework V4.3.0.5.0 starting to appear soon, the first set of whitepapers has been updated to reflect new functionality, updated functionality and experiences from the field and our Oracle Utilities cloud implementations.

The following whitepapers have been updated and are now available from My Oracle Support:

  • ConfigTools Best Practices (Doc Id: 1929040.1) - This has been updated with the latest advice from our implementations and cloud teams. There are a few new sections around Groovy and a new section which highlights the ability to write batch programs using the Plug-In Batch architecture. In Oracle Utilities Application Framework 4.3.0.5.0, we add the capability to implement File Import functionality using Groovy in our Plug-In Batch. We provide a mechanism to support Delimited, Fixed or XML based files within the algorithms. Samples of each are supplied in the product.
  • Identity Management Suite Integration (Doc Id: 1375600.1) - This whitepaper has been greatly simplified to reflect the latest Oracle Identity Management Suite changes and the newer interface that has been migrated from XAI to IWS. The new interface has two new algorithms which are used in our cloud implementations and are now part of the F1-IDMUser object supplied with the product.
    • Generation of Authorization Identifier - The F1-IDMUser object now supports the ability to generate the unique authorization identifier (the 8-character one) if the identifier is not provisioned from Oracle Identity Manager itself. This provides some flexibility in where this identifier can be provisioned as part of the Oracle Identity Manager solution. In the past, the only place this was available was within Oracle Identity Manager itself. This enhancement means that the user can be provisioned from Oracle Identity Manager or as part of the Identity Management interface to Oracle Utilities Application Framework.
    • Duplication of User now supported within interface - In past releases the use of template users was a common way of quickly provisioning users. This release also allows the duplication function within the User Object to be used in isolation or in conjunction with template users for more flexible options in provisioning. If this method is used, a characteristic is added to the duplicated user to indicate it was duplicated from another user (for auditing purposes).

As we get closer to release of products using Oracle Utilities Application Framework 4.3.0.5.0 you will see more and more updated whitepapers to reflect the new and improved changes in the releases.

Attracting Visitors To Your Website

Nilesh Jethwa - Sun, 2017-12-10 15:28

It’s not that easy to go from zero visitors to thousands of potential customers in an instant. But if you implement the right traffic-generating strategy, you can increase the number of visitors coming to your website. If you can get enough traffic, that means you can generate sales. And you can check the main components in your sales process to get more traffic and keep it coming.

Getting Instant and Cheap Traffic

The key components in the sales process that you need to test are the opt-in offer, navigation, order form, and sales copy. You need to test them before implementing your large-scale traffic-generating campaign. Why is it important? Because this will help you prevent losing a lot of money if the campaign fails. Here are some of the best ways to attract visitors to your site:

  • Test your site. There are actually plenty of components to test on your website, including your web design, content, and many others. However, stick to what’s more important when testing your website. The basics are your sales copy, order process, opt-in offer, and site navigation. Later on, you can test other components once your sales generation is stable. 
  • Use Yahoo! Search Marketing. Pay-per-click search engines offer cheap and instant qualified traffic. But you need to come up with targeted keywords in order to get results. Bidding to appear in top listings gives visibility in search results on Yahoo, MSN, and others. This will help you reach 80% of internet users across the globe. 
  • Offer free content such as eBooks. Providing irresistible free resources in exchange for priceless publicity is a common practice among digital marketers today. For instance, a valuable eBook or well-written blog posts can generate loads of web traffic without spending a dime. Just don’t use a sales pitch in writing them so you won’t shoo potential customers away.

Read more at https://www.infocaptor.com/dashboard/how-to-attract-visitors-to-your-site

Docker-Swarm: Running a minimal webserver in a swarm

Dietrich Schroff - Sun, 2017-12-10 12:32
In my last posting on Docker Swarm I created a swarm on VirtualBox with Alpine Linux with an HDD footprint of 290MB per node.
There are some tutorials out there running nginx or a Java webserver in a container, but >100MB for each node seems far too much for my tests.

So I decided to create an application which listens on port 8080 with netcat. I created a directory ncweb with ncweb.sh:

ncweb# cat ncweb.sh
#!/bin/sh
# alpine ships busybox sh, not bash
sed -i 's/Hostname:.*/Hostname: '$HOSTNAME'/g' index.html
# serve index.html on port 8080 in an endless netcat loop
while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; cat index.html;} | nc -l -p 8080; done > logfile 2>&1

and an index.html:
ncweb# cat index.html
<html>
  <head>
    <title>"Hello, World"</title>
  </head>
  <body bgcolor=white>
    <table border="0" cellpadding="10">
      <tr>
        <td>
          <h1>"Hello, World"</h1>
        </td>
      </tr>
    </table>
  </body>
</html>

The Dockerfile looks like this:
# cat Dockerfile
FROM alpine
WORKDIR /tmp
ADD .  /tmp
ENTRYPOINT [ "/tmp/ncweb.sh" ]After that i created the container:
ncweb# docker build -t ncweb:0.2 .
Sending build context to Docker daemon  4.096kB
Step 1/5 : FROM alpine
 ---> 053cde6e8953
Step 2/5 : WORKDIR /tmp
 ---> Using cache
 ---> c3e11ac3773b
Step 3/5 : RUN mkdir ncweb
 ---> Using cache
 ---> d9e634c03cd1
Step 4/5 : ADD .  /tmp
 ---> 95f022aacc1c
Step 5/5 : ENTRYPOINT [ "/tmp/ncweb.sh" ]
 ---> Running in c1a9e8cee248
 ---> 6880521f68e4
Removing intermediate container c1a9e8cee248
Successfully built 6880521f68e4
Successfully tagged ncweb:0.2

And let's do a test without docker swarm:
ncweb# docker run -p 8080:8080  ncweb
But running this as a service fails:
# docker service create --replicas=1 --name myweb ncweb:0.2
image ncweb:0.2 could not be accessed on a registry to record
its digest. Each node will access ncweb:0.2 independently,
possibly leading to different nodes running different
versions of the image.
n0himwum38bqzd8ob1vf8zhip
overall progress: 0 out of 1 tasks
1/1: No such image: ncweb:0.2
^COperation continuing in background.
Use `docker service ps n0himwum38bqzd8ob1vf8zhip` to check progress.

and:
# docker service ps n0himwum38bqzd8ob1vf8zhip
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                 ERROR                        PORTS
8tjsuae9jv8o        myweb.1             ncweb:0.2           node01              Ready               Rejected 3 seconds ago        "No such image: ncweb:0.2"  
qp24ssxb5bl5         \_ myweb.1         ncweb:0.2           alpine              Shutdown            Failed 36 seconds ago         "task: non-zero exit (2)"   
zwfgcatk7zyi         \_ myweb.1         ncweb:0.2           node01              Shutdown            Rejected about a minute ago   "No such image: ncweb:0.2"  
v4a7zkb85yd4         \_ myweb.1         ncweb:0.2           node01              Shutdown            Rejected about a minute ago   "No such image: ncweb:0.2"  
ycjftjusv484         \_ myweb.1         ncweb:0.2           node01              Shutdown            Rejected about a minute ago   "No such image: ncweb:0.2"  
 
# docker service rm n0himwum38bqzd8ob1vf8zhip
n0himwum38bqzd8ob1vf8zhip
The error "No such image..." is happening, because the container ncweb is only in the repository of my master.
The easiest way for my test environment is to distribute the local image to all nodes:
# docker save ncweb:0.3 | ssh 192.168.178.47 docker load
The authenticity of host '192.168.178.47 (192.168.178.47)' can't be established.
ECDSA key fingerprint is SHA256:2/8O/SE1fGJ4f5bAQls5txrKMbqZfMmiZ+Tha/WFKxA.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.178.47' (ECDSA) to the list of known hosts.
root@192.168.178.47's password:
Loaded image: ncweb:0.3

(I have to distribute an SSH key to all nodes.)

and then:
alpine:~/ncweb# docker service create --replicas=1 --name myweb ncweb:0.3
image ncweb:0.3 could not be accessed on a registry to record
its digest. Each node will access ncweb:0.3 independently,
possibly leading to different nodes running different
versions of the image.

# docker service ps myweb
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
in97xlc7azcw        myweb.1             ncweb:0.3           node01              Running             Running 8 seconds ago      
So my nc webserver runs on node01, but I cannot access it there because I did not define any port mappings ;-(

But finally this command did the job:
# docker service create --replicas=1 --name myweb --publish 8080:8080  ncweb:0.3
image ncweb:0.3 could not be accessed on a registry to record
its digest. Each node will access ncweb:0.3 independently,
possibly leading to different nodes running different
versions of the image.
runf8u9r8719sk13mkf8hh8ec
overall progress: 1 out of 1 tasks
1/1: running  
verify: Service converged
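Thanks to swarm's ingress routing mesh, the published port should now answer on every node of the swarm, not only on the node running the task. A quick check against the second node:
# curl -s http://192.168.178.47:8080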
The hostname corresponds to the docker container id on node01:
node01:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
6c9434b08082        ncweb:0.3           "/tmp/ncweb.sh"     37 minutes ago      Up 37 minutes                           myweb.1.lqiyb34cuxxme2141ahsg8neu
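A more durable alternative to copying images around with docker save/load is a small private registry. A sketch, not tested here; the registry address is a placeholder and, when served over plain HTTP, has to be added to each daemon's insecure-registries:
# docker run -d -p 5000:5000 --restart=always --name registry registry:2
# docker tag ncweb:0.3 192.168.178.46:5000/ncweb:0.3
# docker push 192.168.178.46:5000/ncweb:0.3
# docker service rm myweb
# docker service create --replicas=1 --name myweb --publish 8080:8080 192.168.178.46:5000/ncweb:0.3
With a registry in place, the "could not be accessed on a registry" warning disappears and every node pulls the same image digest.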

Remaining open points (a few untested pointers follow below):
  • Is it possible to do a failback, or to limit the number of instances of a service per node?
  • How do I get load balancing for a server application?
    (is an external load balancer needed?)
  • What happens if the manager fails or is shut down?
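Some pointers on these open points, as assumptions to verify rather than tested answers. Newer Docker releases (19.03+) have a flag to cap tasks per node:
# docker service create --replicas=2 --replicas-max-per-node=1 --name myweb --publish 8080:8080 ncweb:0.3
The ingress routing mesh already balances a published port across all replicas, so scaling the service may be enough:
# docker service scale myweb=3
And a single manager is a single point of failure; additional managers can join the swarm with the token printed by:
# docker swarm join-token manager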





Domain Indexes -- 1 : CONTEXT Indexes

Hemant K Chitale - Sun, 2017-12-10 03:17
A CONTEXT Index is one of a class of Domain Indexes.  It is used to build text-retrieval applications.
Instead of a regular B-Tree index that captures the entire text of a VARCHAR2 column as a key, you can build an index that consists of "Tokens", words extracted from a long-ish text in the database column.  Searching the index is based on the CONTAINS operator.

Here is a quick demo in 11.2.0.4 :

First I setup a new user for this demo.  I could have used an existing user but my "HEMANT" user has DBA privileges and I want to demonstrate CONTEXT without such privileges.
Here the key grant is "CTXAPP" that is granted to the user.

SQL> create user ctxuser identified by ctxuser default tablespace users;

User created.

SQL> grant create session, create table to ctxuser;

Grant succeeded.

SQL> grant ctxapp to ctxuser;

Grant succeeded.

SQL> alter user ctxuser quota unlimited on users;

User altered.

SQL>


Next, this user creates the demonstration table and index :

SQL> connect ctxuser/ctxuser
Connected.
SQL> create table my_text_table
2 (id_column number primary key,
3 my_text varchar2(2000));

Table created.

SQL> create index my_text_index
2 on my_text_table(my_text)
3 indextype is ctxsys.context;

Index created.

SQL>
SQL> insert into my_text_table
2 values (1,'This is a long piece of text written by Hemant');

1 row created.

SQL> insert into my_text_table
2 values (2,'Another long text to be captured by the index');

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> -- this update to the index would be a regular background job
SQL> exec ctx_ddl.sync_index('MY_TEXT_INDEX');

PL/SQL procedure successfully completed.

SQL>


Note the call to CTX_DDL.SYNC_INDEX.  Unlike a regular B-Tree index, the CONTEXT (Domain) Index is, by default, *not* updated in real-time when DML occurs against the base table.  (Hopefully, in a future post, I may demonstrate real-time updates to a CTXCAT index, which differs from a CONTEXT Index).
Just remember that this CONTEXT index is not automatically updated.  If new rows are added to the table, they are not visible through the index until a SYNC_INDEX operation is performed.  SYNC_INDEX can be scheduled as a job via DBMS_JOB or DBMS_SCHEDULER, or the sync policy can be declared as part of the CREATE INDEX statement itself, as sketched below.
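Both options, as a sketch (assuming the demo user is also granted CREATE JOB; the interval values are arbitrary):

SQL> begin
2 dbms_scheduler.create_job (
3 job_name => 'SYNC_MY_TEXT_INDEX',
4 job_type => 'PLSQL_BLOCK',
5 job_action => 'begin ctx_ddl.sync_index(''MY_TEXT_INDEX''); end;',
6 repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',
7 enabled => TRUE);
8 end;
9 /

Or, declared at index creation time:

SQL> create index my_text_index
2 on my_text_table(my_text)
3 indextype is ctxsys.context
4 parameters ('sync (every "sysdate+1/24")');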

Now, let me demonstrate usage of the Index with the CONTAINS operator.

SQL> select id_column as id,
2 my_text
3 from my_text_table
4 where contains (my_text, 'written by Hemant') > 0
5 /

ID
----------
MY_TEXT
--------------------------------------------------------------------------------
1
This is a long piece of text written by Hemant


SQL> select my_text
2 from my_text_table
3 where contains (my_text, 'Another long') > 0
4 /

MY_TEXT
--------------------------------------------------------------------------------
Another long text to be captured by the index

SQL>


Yes, the CONTAINS operator is a bit awkward.  It will be some time before you (or your developers) get used to the syntax!
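CONTAINS also has a query language of its own and can return a relevance score; a small sketch (not run against the demo table):

SQL> select score(1), id_column, my_text
2 from my_text_table
3 where contains (my_text, 'long AND text', 1) > 0
4 order by score(1) desc
5 /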

Besides CTX_DDL, a number of other packages in the prebuilt CTXSYS schema are available:
 CTX_CLS
 CTX_DDL
 CTX_DOC
 CTX_OUTPUT
 CTX_QUERY
 CTX_REPORT
 CTX_THES
 CTX_ULEXER

Since I have created a separate database user, I can also demonstrate the additional objects that are created when the INDEXTYPE IS CTXSYS.CONTEXT.

SQL> select object_type, object_name, to_char(created,'DD-MON-RR HH24:MI') Crtd
2 from user_objects
3 order by object_type, object_name
4 /

OBJECT_TYPE         OBJECT_NAME                    CRTD
------------------- ------------------------------ ------------------------
INDEX               DR$MY_TEXT_INDEX$X             10-DEC-17 16:48
INDEX               DRC$MY_TEXT_INDEX$R            10-DEC-17 16:48
INDEX               MY_TEXT_INDEX                  10-DEC-17 16:48
INDEX               SYS_C0017472                   10-DEC-17 16:48
INDEX               SYS_IL0000045133C00006$$       10-DEC-17 16:48
INDEX               SYS_IL0000045138C00002$$       10-DEC-17 16:48
INDEX               SYS_IOT_TOP_45136              10-DEC-17 16:48
INDEX               SYS_IOT_TOP_45142              10-DEC-17 16:48
LOB                 SYS_LOB0000045133C00006$$      10-DEC-17 16:48
LOB                 SYS_LOB0000045138C00002$$      10-DEC-17 16:48
TABLE               DR$MY_TEXT_INDEX$I             10-DEC-17 16:48
TABLE               DR$MY_TEXT_INDEX$K             10-DEC-17 16:48
TABLE               DR$MY_TEXT_INDEX$N             10-DEC-17 16:48
TABLE               DR$MY_TEXT_INDEX$R             10-DEC-17 16:48
TABLE               MY_TEXT_TABLE                  10-DEC-17 16:48

15 rows selected.

SQL>
SQL> select table_name, constraint_name, index_name
2 from user_constraints
3 where constraint_type = 'P'
4 order by table_name, constraint_name
5 /

TABLE_NAME CONSTRAINT_NAME
------------------------------ ------------------------------
INDEX_NAME
------------------------------
DR$MY_TEXT_INDEX$K SYS_IOT_TOP_45136
SYS_IOT_TOP_45136

DR$MY_TEXT_INDEX$N SYS_IOT_TOP_45142
SYS_IOT_TOP_45142

DR$MY_TEXT_INDEX$R DRC$MY_TEXT_INDEX$R
DRC$MY_TEXT_INDEX$R

MY_TEXT_TABLE SYS_C0017472
SYS_C0017472


SQL>


Yes, that is a large number of database objects besides MY_TEXT_TABLE and MY_TEXT_INDEX.  SYS_C0017472 is, of course, the Primary Key Index on MY_TEXT_TABLE (on the ID_COLUMN column).  The others are interesting.

The "Tokens" I mentioned in the first paragraph are, for the purpose of this table MY_TEXT_TABLE, in the DR$MY_TEXT_INDEX$I.

SQL> desc DR$MY_TEXT_INDEX$I
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
TOKEN_TEXT                                NOT NULL VARCHAR2(64)
TOKEN_TYPE                                NOT NULL NUMBER(10)
TOKEN_FIRST                               NOT NULL NUMBER(10)
TOKEN_LAST                                NOT NULL NUMBER(10)
TOKEN_COUNT                               NOT NULL NUMBER(10)
TOKEN_INFO                                         BLOB

SQL> select token_text, token_count
2 from dr$my_text_index$i
3 /

TOKEN_TEXT                                                       TOKEN_COUNT
---------------------------------------------------------------- -----------
ANOTHER                                                                    1
CAPTURED                                                                   1
HEMANT                                                                     1
INDEX                                                                      1
LONG                                                                       2
PIECE                                                                      1
TEXT                                                                       2
WRITTEN                                                                    1

8 rows selected.

SQL>


Notice also which words did *not* become tokens: common words like "This", "is", "a" and "the" are filtered out by the default stoplist.

I have been busy in the last few months and have not published much this quarter. Hopefully, I will get more time in the coming weeks to explore CONTEXT, CTXCAT and Domain Indexes.

Categories: DBA Blogs

Steps For Optimizing Your Website

Nilesh Jethwa - Sat, 2017-12-09 16:57

People use search engines like Google when looking for products or brands these days. In fact, 60 percent of consumers use Google search just to find exactly what they want, and more than 80 percent of online searches lead to in-store visits as well as sales. So if you want to receive massive traffic and increase your sales, optimizing your website is the perfect solution.

Optimized Website, A Business Owner’s Priority

Search engine optimization, or SEO, is not that easy to implement, because there are constant changes you need to keep up with, such as search engine algorithms. For instance, Google's search algorithm constantly evolves in pursuit of giving searchers the best results.

It's not enough to hire an SEO professional to do the job today and then stop optimizing for the months or years that follow. Experts understand that this is a non-stop process. Your website needs to deal with the changing algorithms, and as long as they keep evolving, your website also needs to keep up. This is why an optimized website should be a priority for any business owner across the globe.

Read more at  https://www.infocaptor.com/dashboard/what-are-the-steps-to-optimizing-your-website

how to count the number of records in a file

Tom Kyte - Fri, 2017-12-08 16:06
How do I count the number of records in a flat file (.csv/.dat), including the header and trailer records? The code was written in PL/SQL using UTL_FILE. Can you suggest the best method to implement the logic in PL/SQL? And the trailer record sho...
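(Not from the thread: a minimal line-counting sketch with UTL_FILE, assuming a directory object named DATA_DIR and a file input.csv exist.)
declare
  f  utl_file.file_type;
  l  varchar2(32767);
  n  pls_integer := 0;
begin
  f := utl_file.fopen('DATA_DIR', 'input.csv', 'r');
  loop
    begin
      utl_file.get_line(f, l);
      n := n + 1;  -- counts every line, header and trailer included
    exception
      when no_data_found then exit;  -- end of file reached
    end;
  end loop;
  utl_file.fclose(f);
  dbms_output.put_line('record count: ' || n);
end;
/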
Categories: DBA Blogs

advise given by segment advisor 12c

Tom Kyte - Fri, 2017-12-08 16:06
Hi Tom, my question is: does the segment advisor in 12cR1 advise the use of online table redefinition for tables in both dictionary managed and locally managed tablespaces? In addition, what about the use of segm...
Categories: DBA Blogs

Dynamic fields in External File

Tom Kyte - Fri, 2017-12-08 16:06
Sorry if it's a weird requirement. I have an input feed coming in like the following (only two rows per file): col1, col2, col3,Attribute_1,Attribute_2,Attribute_3,......,Attribute_n col1, col2, col3,Value_1,Value_2,Value_3,......,Value_n ...
Categories: DBA Blogs

Reconstruct sale from audit commands

Tom Kyte - Fri, 2017-12-08 16:06
We are running on 11.2.0.4, 12.1 and 12.2.
SQL> show parameter audit_trail
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail ...
Categories: DBA Blogs

Reg: Elapsed time

Tom Kyte - Fri, 2017-12-08 16:06
Hello Team, quite confused with the definition of elapsed time. Elapsed time is equal to DB time, but DB time = CPU time + wait time. Is that correct or not? Somewhere I found that elapsed time is nothing but wall time. Please comment on the above statem...
Categories: DBA Blogs

#UKOUG_TECH17

Yann Neuhaus - Fri, 2017-12-08 15:40
Award

I've received an award for an article I wrote last year: the Most Read Oracle Scene Article in 2016 – Technology. I like to write for Oracle Scene magazine. It is rare today to have a magazine that is both available in a paper version and publicly available on the web. And I must say that, as an author, all the people behind it are doing great work. Thanks to them, and thanks for the award. Seeing that what I write helps others is the motivation to keep writing.

The article is: Is your AWR/Statspack Report Relevant.

This is the occasion to link to other articles I’ve written for the UKOUG Magazine. Statspack, because not everybody has Enterprise Edition with optional Diagnostic Pack: Improving Statspack Experience. One on the parameter I hate the most: CBO Choice between Index & Full Scan: the Good, the Bad & the Ugly parameters. Another about the statistics that should replace this awful parameter: Demystifying WORKLOAD System Statistics Gathering.

Two interesting features in 12c: 12c Online Statistics Gathering & Session Private Statistics. A big change that happened in 12.2.0.2 concerning availability: I/O Error on Datafile: Instance Crash or Datafile Offline?. My SLOB performance tests when the ODA Lite came with NVMe SSD: Performance for All Editions on ODA X6-2S/M. And finally, the article on the great Oracle features for physical transport/clone/migration: From Transportable Tablespaces to Pluggable Databases.

I’ve stolen a few pictures to illustrate this blog post, from UKOUG photo albums, and from friends.

Presentations

The article on Transportable Tablespaces and Pluggable Databases is actually based on one presentation I did. I was in that big Hall 1 [I realize that some may make a joke on this, but I'm talking about the room], where I look like Ant-Man from the back of the room. But doing live demos is great on this huge screen.

The idea for this presentation came a year ago when preparing a 5-minute talk for the Oracle Open World 2016 ACED talks (blog post about this), and this subject is so exciting that I wanted to share more about it. Online PDB clone and relocate will be the features you will like the most when going to Multitenant.

My second presentation was more developer-focused, exposing all the join methods that can magically construct your query result in record time or, when a bad join method is chosen, leave your 2-second query still running after an hour. I explained the join methods in 3 ways: the theory with .ppt animations, the execution plan with dbms_xplan, and live query runs with SQL Monitor, showing the different operations and A-Rows increasing in real time.

I was also co-presenting in a roundtable on Oracle Cloud PaaS, sharing my modest experience with DBaaS. Everybody talks about Cloud, and it is good to also talk about the problems encountered and how to deal with them.

Round tables

Despite strong competition from all the good sessions, I also attended a few round tables. These conferences are a good opportunity to meet and share with other users and product managers. Oracle is a big company, and we sometimes think they care only about their biggest customers, but that is not accurate. There are several product managers who really listen to customers. A great discussion was about something that has been slowly changing for a few years: the documentation, which used to be really precise in explaining the when, why, and how, is now more vague.

Community

Geeking out at the Oracle Developer Community lounge, eating something decent near the conference center, drinking while discussing technical stuff, the ACE dinner (and the first ACED briefing outside the US), posing with award winners, and the amazing chocolate tasting…



This article, #UKOUG_TECH17, appeared first on Blog dbi services.
