Feed aggregator

Error: "Specified partition does not exist" when creating multi-group XML Index on partitioned table

Tom Kyte - Mon, 2018-02-05 15:06
Hi Tom, We are trying to create a multi-group XML Index on a partitioned table. The table has no sub-partitions, as they are not yet supported by XML Indexes, but we need the XML Index to have more than one group. This is the table: <code>CREAT...
Categories: DBA Blogs

Insert date from Oracle to PostgreSQL

Tom Kyte - Mon, 2018-02-05 15:06
I try to insert a date from Oracle to PostgreSQL. My database date format is dd/mm/yyyy, the PostgreSQL date format is yyyy/mm/dd. <code>insert into "public"."test"@PG_LINK select 'test' name, TO_DATE (TO_CHAR(SU.DATE_OF_BIRTH, 'yyyy/mm/dd'),'yyyy/mm/dd...
Categories: DBA Blogs

Do something when the transaction ends

Tom Kyte - Mon, 2018-02-05 15:06
I work with Delphi and Oracle, and what I'm trying to do is change the value of a field, informing a user by mail. This is currently being done, wrongly, with a trigger. I thought that in a compound trigger, in the after-statement section, the transaction was over (co...
Categories: DBA Blogs

extract an xml node

Tom Kyte - Mon, 2018-02-05 15:06
I have some xml that has a namespace defined in a node within a namespace node (see Ah:AppHdr). I would like to extract the value of the element Ah:MsgRef. Can I extract it, and if so, how? When I try, I get an XPATH error. declare l_req xmlType := x...
Categories: DBA Blogs

Can not make read only tablespace which has offline datafile

Tom Kyte - Mon, 2018-02-05 15:06
Hello Dears, A couple of months ago we had a problem: we lost the most recently added disk on the server, which contained the latest 4 datafiles. As we had no database/archivelog backup, we had to mark those datafiles offline drop and open the instance. W...
Categories: DBA Blogs

Join Us for the OAUG AppsTech 2018 eLearning Series

Steven Chan - Mon, 2018-02-05 12:15

Mark your calendar and join us for the upcoming OAUG AppsTech eLearning Series. As part of this OAUG-sponsored series, members of Oracle E-Business Suite Development and Product Management will be providing the following webinars:

Technical Essentials for Running Oracle E-Business Suite on Oracle Cloud
Speaker: Santiago Bastidas
Date & Time: Tuesday, February 13, 2018, 1:00 p.m. EST

Faster and Better: Oracle E-Business Suite Desktop Integration
Speaker: Senthilkumar Ramalingam
Date & Time: Wednesday, February 21, 2018, 1:00 p.m. EST

Oracle E-Business Suite 12.2: Fusion Middleware (WebLogic Server) Administration
Speakers: Kevin Hudson and Elke Phelps
Date & Time: Tuesday, February 27, 2018, 1:00 p.m. EST

A complete listing of sessions for the AppsTech eLearning Series is available on the OAUG website.


Categories: APPS Blogs

What Makes MicroServices Different from SOA?

Jan Kettenis - Mon, 2018-02-05 10:54
In this article I discuss what is different between MicroServices and a traditional Service Oriented Architecture, as such an architecture may look when you know, for example, Oracle SOA. I also discuss some of the misconceptions heard or read concerning MicroServices. It is written by and for a person who knows SOA and wonders what to do with MicroServices. If MicroServices is what you do already, I probably have little news for you.
I wrote this article many months ago, but somehow forgot to publish it.
What's Different Compared to Traditional SOA?
In his article on InfoWorld, Matt McLarty states that this question should not matter. The real question is: "what can we learn from the SOA movement?", and I concur with his 5 important lessons. Nevertheless, even after reading his article, people like me will keep on wondering what the practical implications may be for the way we use our technology now and how we should change that.

All in all, most of the MicroServices principles are fundamental to what I would consider to be a "good" Service Oriented Architecture. Of course, there is no such thing as the SOA, although in my opinion many best practices, and lessons learned the hard way, have led to identifying some generic characteristics of the more successful ones, which below I refer to as classical SOA.

The way I see it (from my classical SOA perspective):
Stateful vs Stateless
MicroServices are stateless by principle. In SOA it is a best practice to avoid stateful services, but that is not a principle. You should try to avoid stateful BPEL, but when creating a composite service that involves one or more asynchronous services, that leaves you little choice. As I explained in my previous blog about MicroServices and BPM and Case Management, the latter two are stateful by definition, so there you also don't have a choice.

However, in case of asynchronous (request/response) communication, next time you may consider using events instead, where the response is not handled by an asynchronous callback but by publishing an event (by means of the EDN or using JMS). Generally this complicates the implementation, but who said that MicroServices did not come with a price?
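
To make this concrete, here is a minimal sketch in Python, with a hypothetical in-memory broker standing in for an event backbone such as the EDN or JMS; the names Broker, process_order and on_response_event are illustrative, not part of any product API:

# Minimal sketch: the consumer subscribes to an event instead of waiting
# for an asynchronous callback; the in-memory "broker" stands in for an
# event backbone such as the EDN or JMS.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

broker = Broker()

# The requester reacts to the published event whenever it arrives.
def on_response_event(payload):
    print("Order %s processed: %s" % (payload["order_id"], payload["status"]))

broker.subscribe("order.processed", on_response_event)

# The service publishes its outcome instead of invoking a callback.
def process_order(order_id):
    broker.publish("order.processed", {"order_id": order_id, "status": "OK"})

process_order(42)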

Reuse
SOA is about reuse. In a classical SOA there often are a number of small, reusable "technical" services that are then reused to compose bigger "business services". Examples include some service to handle asynchronous interaction in a generic way, and a service that retrieves some list of values from a database. We made them to speed up the development process, because creating the next application takes less time by reusing the services we created for the previous one. 

Everybody is happy, until a new requirement forces us to change the generic service, with potential impact on all existing applications that use it. If you are lucky, some regression test suite is available to verify that the existing functionality keeps working, but even then you may find that people don't feel comfortable unless all the other applications have been retested as well. You then may come to a point where you start wondering if all that reuse was such a great idea.

Much more than classical SOA, MicroServices are about minimal-function services built around business capabilities (not necessarily 'fine-grained'), where reuse is even discouraged if it introduces dependencies that may jeopardize business agility. There obviously is reuse with MicroServices (a reusable printing service provides a sensible business capability), but you should, for example, avoid shared custom Java libraries that are deployed independently. Also in a classical SOA you can avoid this by making sure that you package a specific version of the library with the service, so that it will never be impacted by any change unless you want it to be.

In general, compared to classical SOA, applying MicroServices principles will make you start thinking differently about the responsibility and granularity of services. Again, this may come with a price, as some functionality may have to be duplicated to support business agility.

Data Services vs Data Replication
In a classical SOA we may not think for a second before deciding we need a (reusable) data service to get customer data. When reading about MicroServices you will find that the (already classical) example of a bad practice is having some sort of CustomerDataService that may fail and, with that, cause an OrderService to fail to complete successfully.

It is for this reason that the Design for Failure principle implies that a MicroService should have its own data store when possible, and may have its own copy of shared business data like customer data. In this way the successful completion of the OrderService never depends on some CustomerDataService being available. Data is synchronized when necessary and feasible.

You may already have realized that this is a specialization of the reuse issue addressed in the previous section. You will also realize that this is one of the more complex, if not the most complex, challenges to address, and the choice to replicate data is not an easy decision to make.

SOAP vs REST
The interface of MicroServices should be simple, which almost de facto seems to imply REST (over HTTP) and JSON. With classical SOA this typically is SOAP and XML, although you are by no means limited to that. For a while now we have been seeing more and more SOA services with REST interfaces.

Multiple vs Single Containers
With classical SOA many services will be deployed on the same SOA container, all sharing the same infrastructure (data sources, messaging, operations tooling, etc.) that the container provides; reuse of that infrastructure is the reason to do so.

However, as a result, one single service behaving badly can impact all other services on the same container. I have seen cases where a single failing service brought down the complete container. One of the reasons to deploy every version of a MicroService in its own container is to prevent this type of issue. In this way it can be scaled, improved, and fixed without affecting any other MicroService.

Choreography
As I explained in my previous posting about MicroServices, there can be quite a few challenges to overcome when business functionality has to be supported by a set of MicroServices working together. Quite a few of those could be avoided or addressed much more easily if all services were deployed on the same container (which in a classical SOA is more or less the default), in particular challenges related to monitoring and operations.

If there is any area in which MicroServices could quickly start adding value to a classical SOA, then it is by orchestrating MicroServices (instead of classical SOA services) in the case of Business Process Management or Case Management. Compared to classical SOA, what you will get "for free" is that the cluttering of the orchestration by technical aspects will be kept to a minimum (if present at all), as you will be orchestrating business functions with (mostly) business-oriented interfaces.

Technology Choices
With classical SOA the technology is limited to what the SOA container supports. For example, in the case of Oracle you primarily implement your services using BPEL, Mediator or BPMN, simply because that is the easiest to do. Of course there can be good arguments for restricting the technologies used (even in a MicroServices environment you might want to have guidelines on that), but in practice you may find that this does not always result in the best designed, constructed, and operated service. If all you have is a hammer...

In contrast, MicroServices are polyglot regarding technology: for each individual MicroService you will use the technology that is best suited, considering the functionality you have to provide and the skills present in the team. Different types of MicroServices may be implemented in completely different ways, using completely different sets of technologies. However, except for the interface, the technology used is completely transparent to the consumer.

Message Transformation
Another MicroServices principle is smart endpoints / dumb pipes, meaning that no transformation or enrichment happens in some Enterprise Service Bus. If an ESB is used, its role is limited to routing and perhaps serving as a layer for enforcing security. In a classical SOA architecture, transformation and some types of enrichment are typically done in the Service Bus.
Some Misconceptions About MicroServices
Finally, I would like to address some of the misconceptions I hear and read about MicroServices:
    • DevOps implies MicroServices. It's more the other way around. DevOps is about culture and shared responsibility for the operation of one application. That can also be applied to many other architectures.
    • SOA is not MicroServices. Many see MicroServices as a sub-domain of SOA. As James Lewis and Martin Fowler state, some consider MicroServices to be SOA done right.
    • There is no use for an Enterprise Service Bus in a MicroServices architecture. Well, you may still need the routing and security features it can offer (see also the section Message Transformation above). Perhaps not the traditional Enterprise Service Bus as we know it, but rather something you could call a "Business Event Bus".

    Start/Stop Extract/Replicat with REST API/JSON

    DBASolved - Sun, 2018-02-04 22:23

    Oracle GoldenGate Microservices Architecture is designed to give the user three different ways of interacting with replication from anywhere. One of these approaches is to use the RESTful APIs that come bundled with the release. By using the RESTful APIs, an organization can orchestrate how they want GoldenGate to work within their environment.

    In this post, you will take a look at how to start a pre-existing extract/replicat by using the RESTful API endpoints. For more information on the APIs that are available, please refer to the Oracle docs located here.

    If you have an existing extract/replicat in a down or pending status, you can start it using a JSON file and the associated RESTful API endpoint.

    The Administration Service page shows that the extract is stopped. To start the extract using the RESTful API, you will need a JSON file that contains the following:

    
    {
        "$schema": "ogg:command",
        "name": "start",
        "processName": "IEXTSOE",
        "processType": "extract"
    }
    

    Then from the command line, you can use cURL or some other method that accepts RESTful API calls to start the extract.

    curl -u oggadmin:******** -H "Content-Type: application/json" -H "Accept: application/json" -X POST "http://localhost:16001/services/v2/commands/execute" -d @start_extracts.json | python -mjson.tool
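
    If you prefer scripting this over raw cURL, a minimal Python sketch of the same call (assuming the endpoint and the oggadmin credentials shown above, and the widely used requests library) could look like this:

    # Minimal sketch: issue the same GoldenGate command with Python's
    # requests library instead of cURL. Endpoint and credentials are the
    # ones assumed in this post; adjust for your environment.
    import json
    import requests

    command = {
        "$schema": "ogg:command",
        "name": "start",
        "processName": "IEXTSOE",
        "processType": "extract",
    }

    response = requests.post(
        "http://localhost:16001/services/v2/commands/execute",
        json=command,
        auth=("oggadmin", "********"),
        headers={"Accept": "application/json"},
    )
    # Pretty-print the JSON response, like piping cURL through json.tool.
    print(json.dumps(response.json(), indent=2))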
    

    Upon execution of the cURL command, you receive a status response on the command line in JSON output. This response shows you that the extract is starting and then started.

    {
        "$schema": "api:standardResponse",
        "links": [
            {
                "href": "http://localhost:16001/services/v2/commands/execute",
                "mediaType": "application/json",
                "rel": "canonical"
            },
            {
                "href": "http://localhost:16001/services/v2/commands/execute",
                "mediaType": "application/json",
                "rel": "self"
            }
        ],
        "messages": [
            {
                "$schema": "ogg:message",
                "code": "OGG-00975",
                "issued": "2018-02-05T03:27:17Z",
                "severity": "INFO",
                "title": "EXTRACT IEXTSOE starting",
                "type": "http://docs.oracle.com/goldengate/c1230/gg-winux/GMESG/oggus.htm#OGG-00975"
            },
            {
                "$schema": "ogg:message",
                "code": "OGG-15426",
                "issued": "2018-02-05T03:27:17Z",
                "severity": "INFO",
                "title": "EXTRACT IEXTSOE started",
                "type": "http://docs.oracle.com/goldengate/c1230/gg-winux/GMESG/oggus.htm#OGG-15426"
            }
        ]
    }
    

    When you go back to the web page for the Administration Service, you see that the extract has been started.

     

    This same process can be used when you want to start/stop a replicat.

    Enjoy!!

    Categories: DBA Blogs

    How to run our Integration tests in Parallel against Oracle with a clean state for every test

    Tom Kyte - Sun, 2018-02-04 20:46
    We have a bunch of JUnit tests that interact with the DB. As we all know, when you write tests that interact with a DB, you always have the problem of random DB state that can make these types of DB tests brittle. Our DB is very big and has multi schem...
    Categories: DBA Blogs

    Apex item label justification

    Tom Kyte - Sun, 2018-02-04 20:46
    hey, I have an Apex Form Page with several items like that:(blanks instead of -) Number field 1: [ ] -----------field 2: [ ] ------any field 3: [ ] so the item label, is right side. My question is: Is it poss...
    Categories: DBA Blogs

    Say Hello to Red Samurai Contextual Chatbot with TensorFlow Deep Neural Network Learning

    Andrejus Baranovski - Sun, 2018-02-04 02:33
    We are building our own enterprise chatbot. This chatbot helps enterprise users run various tasks - invoice processing, inventory review, insurance case review, order processing - and it will be compatible with various customer applications. The chatbot is based on TensorFlow machine learning for user input processing. Machine learning helps to identify the user intent; our custom algorithm helps to set the conversation context and return a response. The context gives control over the sequence of conversations under one topic, allowing the chatbot to keep up a meaningful discussion based on the user's questions/answers. The UI part is implemented in two different versions - JET and ADF - to support integration with both ADF and JET applications.

    Below is a trace of conversations with the chatbot:


    The user statement "Ok, I would like to submit payment now" sets the context to transaction. If the word payment is entered in the context of transaction, a payment processing response is returned. Otherwise, if there is no context, the word payment doesn't return any response. A greeting statement resets the context.

    Intents are defined in a JSON structure. The list of intents is defined with patterns and tags. When the user types text, TensorFlow machine learning helps to identify the pattern and returns probabilities for matching tags. The tag with the highest probability is selected or, if a context was set, the tag from the context. The response for an intent is returned randomly, based on the provided list. An intent can be associated with a context; this helps to group multiple related intents:
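
    The screenshot with our actual intents is not reproduced here; as a purely hypothetical illustration of the shape such a definition takes, sketched as a Python dict so it can be dumped straight to JSON:

    # Hypothetical intents definition: each intent has a tag, example
    # patterns, canned responses, and optional context_set/context_filter
    # fields that tie related intents together.
    import json

    intents = {
        "intents": [
            {
                "tag": "greeting",
                "patterns": ["Hi", "Hello", "Good day"],
                "responses": ["Hello, how can I help?"],
            },
            {
                "tag": "payment",
                "patterns": ["I would like to submit payment now"],
                "responses": ["Sure, let's process your payment."],
                "context_set": "transaction",
            },
        ]
    }

    with open("intents.json", "w") as f:
        json.dump(intents, f, indent=2)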


    The contextual chatbot is implemented based on an excellent tutorial - Contextual Chatbots with Tensorflow. This is probably one of the best tutorials for a chatbot based on TensorFlow. Our chatbot code closely follows the ideas and code described there. You can run the same on your own TensorFlow environment - the code is available on GitHub. You should run the model notebook first and then the response notebook.

    The model notebook trains the neural network to recognize intent patterns. We load the JSON file with intents into TensorFlow:


    The list of intent patterns is prepared so that it is suitable to feed to the neural network. Patterns are translated into stemmed words:
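
    In place of the screenshot, a minimal sketch of this preparation step, assuming NLTK's Lancaster stemmer as used in the referenced tutorial:

    # Minimal sketch: tokenize each intent pattern and reduce the words to
    # stems to build the vocabulary fed to the network.
    # Requires: nltk.download('punkt') for the tokenizer models.
    import nltk
    from nltk.stem.lancaster import LancasterStemmer

    stemmer = LancasterStemmer()

    words = []
    for pattern in ["Hi", "I would like to submit payment now"]:
        tokens = nltk.word_tokenize(pattern)
        words.extend(stemmer.stem(w.lower()) for w in tokens)

    # Deduplicate and sort to get a stable vocabulary.
    words = sorted(set(words))
    print(words)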


    The learning part is done with the TensorFlow deep learning library TFLearn. This library makes TensorFlow simpler to use for machine learning by providing a higher-level API. In particular, for our chatbot we are using its Deep Neural Network model - DNN:
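
    The model definition itself is shown in the post as a screenshot; a sketch in the style of the referenced tutorial (the layer sizes and epoch count are the tutorial's defaults, not necessarily ours) would be:

    # Sketch of the TFLearn DNN from contextual-chatbot tutorials: two
    # hidden layers of 8 neurons and a softmax output over the intent tags.
    import tflearn

    # Dummy training data for illustration: 2 documents over a 5-word
    # vocabulary, 2 intent tags. In the real notebook these come from the
    # stemmed patterns (bag-of-words) and one-hot encoded tags.
    train_x = [[0, 1, 0, 1, 1], [1, 0, 1, 0, 0]]
    train_y = [[1, 0], [0, 1]]

    net = tflearn.input_data(shape=[None, len(train_x[0])])
    net = tflearn.fully_connected(net, 8)
    net = tflearn.fully_connected(net, 8)
    net = tflearn.fully_connected(net, len(train_y[0]), activation='softmax')
    net = tflearn.regression(net)

    model = tflearn.DNN(net)
    model.fit(train_x, train_y, n_epoch=1000, batch_size=8, show_metric=True)
    model.save('model.tflearn')  # persisted for the response notebook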


    Once training is complete and the model is created, we can save it for future reuse. This allows us to keep the model outside of the chatbot response processing logic and makes it easier to re-train the model on a new set of intents when required:


    In the response module, we load the saved model back:


    The response function acts as the entry point to our chatbot. It gets the user input and calls the classify function. The classification function, based on the learned model, returns a list of suggested tags for the identified intents. The algorithm locates the intent by its tag and returns a random reply from the associated list of replies. A context-based reply is returned only if the context was set previously:
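
    Condensing the logic from the screenshots, a stand-alone sketch (following the referenced tutorial; the model prediction is stubbed here so the control flow can run on its own):

    # Condensed sketch of classify/response. In the real notebook, classify
    # calls model.predict on a bag-of-words vector; here it is stubbed.
    import random

    ERROR_THRESHOLD = 0.25
    context = {}  # conversation context per user

    intents = {"intents": [
        {"tag": "payment",
         "responses": ["Sure, let's process your payment."],
         "context_filter": "transaction"},
    ]}

    def classify(sentence):
        probabilities = {"payment": 0.91}  # stubbed model prediction
        results = [(tag, p) for tag, p in probabilities.items()
                   if p > ERROR_THRESHOLD]
        return sorted(results, key=lambda r: r[1], reverse=True)

    def response(sentence, user_id="default"):
        for tag, _ in classify(sentence):
            for intent in intents["intents"]:
                if intent["tag"] != tag:
                    continue
                # A context-based reply fires only if the context was set.
                if "context_filter" in intent and \
                        context.get(user_id) != intent["context_filter"]:
                    continue
                if "context_set" in intent:
                    context[user_id] = intent["context_set"]
                return random.choice(intent["responses"])
        return None

    context["default"] = "transaction"  # e.g. set earlier in the dialog
    print(response("payment"))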


    Stay tuned for more blog posts on this topic.

    Ubuntu 17.10: s2disk/hibernate broken with kernel 4.13.0-32

    Dietrich Schroff - Sat, 2018-02-03 14:22
    Last week my notebook refused to startup after s2disk/hibernate. The resume process started up to 100% and then the screen went black and everything stopped...

    Hmm...
    First idea: Something disappeared inside the grub configuration.

    But this was okay.

    After nearly one hour, my last try was booting an old kernel. And with
    schroff@zerberus:/boot$ uname -a
    Linux zerberus 4.13.0-17-generic #20-Ubuntu SMP Mon Nov 6 10:04:08 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
    s2disk and resume worked again...


    PURGEOLDEXTRACTS Not Purging Trail Files Part2

    Michael Dinh - Sat, 2018-02-03 10:55
    If you read the post PURGEOLDEXTRACTS Not Purging Trail Files, you will find that the solution was to replace the syntax in mgr.prm as shown below:
    Replace PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 1
    With    PURGEOLDEXTRACTS /ggs/dirdat/*, USECHECKPOINTS, MINKEEPDAYS 2 
    

    I found the solution by luck vs correct analysis; hence, the adage “Better to be lucky than good.”

    Recently, the same issue occurred again for another environment and the solution was just the opposite.

    PURGEOLDEXTRACTS dirdat/*, USECHECKPOINTS, MINKEEPHOURS 24, FREQUENCYMINUTES 30
    -- PURGEOLDEXTRACTS /DBFS/ggs/dirdat/*, USECHECKPOINTS, MINKEEPHOURS 24, FREQUENCYMINUTES 30
    

    Why is that?

    No convention and inconsistency.

    Here are the details.

    --- How is manager configured?
    GGSCI> send manager GETPURGEOLDEXTRACTS
    
    Sending GETPURGEOLDEXTRACTS request to MANAGER ...
    
    --- Manager is configured with trail pointing to /DBFS
    PurgeOldExtracts Rules
    Fileset                              MinHours MaxHours MinFiles MaxFiles UseCP
    /DBFS/ggs/dirdat/*                   24       0        1        0        Y
    OK	
    --- Extract trail showing from $GG_HOME/dirdat
    Extract Trails
    Filename                        Oldest_Chkpt_Seqno  IsTable  IsVamTwoPhaseCommit
    /u01/app/gg/12.2.0/dirdat/aa    16285
    
    --- How was the extract created?
    GGSCI> send e* status
    
    Sending STATUS request to EXTRACT E_HAWK ...
    
    
    EXTRACT E_HAWK (PID 40932)
      Current status: Recovery complete: At EOF
    
      Current read position:
      Sequence #: 16285
      RBA: 27233729
      Timestamp: 2018-01-25 21:01:35.000450
      Extract Trail: dirdat/aa --- This is how the trail is defined when extract was created
    
    GGSCI> info e*
    
    EXTRACT    E_HAWK    Last Started 2018-01-25 21:22   Status RUNNING
    Checkpoint Lag       00:00:00 (updated 00:00:00 ago)
    Process ID           40932
    Log Read Checkpoint  File dirdat/aa000016286
                         2018-02-01 14:42:29.000124  RBA 29233729
    GGSCI> exit
    
    --- From $GG_HOME, dirdat is using symbolic link to /DBFS
    $ ls -ld dir*
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirchk -> /DBFS/ggs/dirchk
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dircrd -> /DBFS/ggs/dircrd
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirdat -> /DBFS/ggs/dirdat
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirdef -> /DBFS/ggs/dirdef
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirdmp -> /DBFS/ggs/dirdmp
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirout -> /DBFS/ggs/dirout
    drwxr-xr-x 2 ggsuser oinstall 4096 Jan 26 13:49 dirpcs
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirprm -> /DBFS/ggs/dirprm
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirrpt -> /DBFS/ggs/dirrpt
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirsql -> /DBFS/ggs/dirsql
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirtmp -> /DBFS/ggs/dirtmp
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirwlt -> /DBFS/ggs/dirwlt
    lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirwww -> /DBFS/ggs/dirwww
    

    In conclusion, the PURGEOLDEXTRACTS location should be defined with the same path convention as the trail in the extract.

    Isn’t that intuitive?

    Oracle should make this a MOS Doc ;=)

    Rows and column size - how large can a table be

    Tom Kyte - Sat, 2018-02-03 07:46
    Hi Tom, Can an Oracle database accommodate 127 million rows with a maximum of 17 columns, where each value is 1 character?
    Categories: DBA Blogs

    Restore archivelogs from RMAN backup

    Learn DB Concepts with me... - Fri, 2018-02-02 14:06

    rman> restore archivelog from logseq=37501 until logseq=37798 thread=1;

    or

    rman> restore archivelog sequence between 37501 and 37798;
    Categories: DBA Blogs

    Difference Between Unique Index and Primary Key Index

    Tom Kyte - Fri, 2018-02-02 13:26
    Is fetching row(s) using a primary key index (in the where clause) better than fetching row(s) using a unique index (in the where clause)? Is there any internal difference between a unique index and a primary key index?
    Categories: DBA Blogs

    PL/SQL procedure issue (bulk collect and insert on a dynamic table)

    Tom Kyte - Fri, 2018-02-02 13:26
    Hi All, Requirement: We need to pass a table name as a parameter into the procedure, and further I need to concatenate that value with a string to make the correct table name that will be available in the database. For ex: As per my code, if a...
    Categories: DBA Blogs

    Lag Analytics Function

    Tom Kyte - Fri, 2018-02-02 13:26
    Hi Tom, Can you please help me with the below: I have a table like below: <code>Yr Qtr Mth Sales 2010 1 1 1000 2010 1 2 2000 2010 1 3 2500 2010 2 4 3000 2010 2 5 3500 2010 2 ...
    Categories: DBA Blogs

    How to use ASH report

    Tom Kyte - Fri, 2018-02-02 13:26
    Hi Tom, A few days back I attended an interview, and one question which I failed to answer properly was how to use an ASH report. I know a few things about ASH reports, like Top User Events, Top CPU time, and which SQL has spent what % on which event. But...
    Categories: DBA Blogs

    January 2018 Update to E-Business Suite Technology Codelevel Checker (ETCC)

    Steven Chan - Fri, 2018-02-02 10:49

    The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify application or database tier overlay patches that need to be applied to your Oracle E-Business Suite Release 12.2 system. ETCC maps missing overlay patches to the default corresponding Database Patch Set Update (PSU) patches, and displays them in a patch recommendation summary.

    What’s New

    ETCC has been updated to include bug fixes and patching combinations for the following recommended updates:

    • Oracle Database Proactive BP 12.1.0.2.180116
    • Oracle Database PSU 12.1.0.2.180116
    • Oracle JavaVM Component Database PSU 12.1.0.2.180116
    • Oracle Database Patch for Exadata BP 12.1.0.2.180116
    • Microsoft Windows Database BP 12.1.0.2.180116
    • Oracle JavaVM Component 12.1.0.2.180116 on Windows

    Obtaining ETCC

    We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.


    Categories: APPS Blogs
