Feed aggregator

Using Flume - Flexible, Scalable, and Reliable Data Streaming by Hari Shreedharan; O'Reilly Media

Surachart Opun - Thu, 2014-10-09 03:37
Hadoop is an open-source software framework for storage and large-scale processing of data sets on clusters of commodity hardware. But how do you deliver logs to Hadoop HDFS? Apache Flume is open source, integrates with HDFS and HBase, and is a good choice for real-time collection of log data from front-end or logging systems.
Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It uses a simple data model: Source => Channel => Sink.
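As a minimal sketch of that model (the agent and component names below are made up for illustration, not taken from the book), a single-agent configuration wires the three together like this:

# hypothetical single-agent pipeline: source -> channel -> sink
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = snk1
# a netcat source listening on a local port
agent1.sources.src1.type = netcat
agent1.sources.src1.bind = 0.0.0.0
agent1.sources.src1.port = 44444
# an in-memory channel buffering events between source and sink
agent1.channels.ch1.type = memory
# a logger sink that simply logs each event it receives
agent1.sinks.snk1.type = logger
# wire the source and the sink to the channel
agent1.sources.src1.channels = ch1
agent1.sinks.snk1.channel = ch1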
It's a good time to introduce a good book about Flume - Using Flume - Flexible, Scalable, and Reliable Data Streaming by Hari Shreedharan (@harisr1234). It is written in 8 chapters: the basics of Apache Hadoop and Apache HBase, the idea of streaming data using Apache Flume, the Flume model (sources, channels, sinks), and some more on interceptors, channel selectors, sink groups, and sink processors. Additionally, there are chapters on getting data into Flume and on planning, deploying, and monitoring Flume.

This book was written about how to use Flume. It gives good guidance on Apache Hadoop and Apache HBase before starting on the Flume data flow model. Readers should know some Java, because they will find Java code examples in the book that make things easier to understand. It's a good book for anyone who wants to deploy Apache Flume and build custom components.
The author devotes a chapter to each part of the Flume data flow model, so readers can pick the chapter for the part they care about: if you want to know about sinks, read Chapter 5 until you get the idea. In addition, Flume has a lot of features, and readers will find examples of them throughout the book. Each chapter has a references section that readers can use to find out more, which is quick and easy to follow in the ebook.
The illustrations in the book help readers see the big picture of using Flume and give ideas for developing it further in their own systems or projects.
So readers will be able to learn about operations: how to configure, deploy, and monitor a Flume cluster, and how to customize the examples to develop Flume plugins and custom components for their specific use cases.
  • Learn how Flume provides a steady rate of flow by acting as a buffer between data producers and consumers
  • Dive into key Flume components, including sources that accept data and sinks that write and deliver it
  • Write custom plugins to customize the way Flume receives, modifies, formats, and writes data
  • Explore APIs for sending data to Flume agents from your own applications
  • Plan and deploy Flume in a scalable and flexible way—and monitor your cluster once it’s running
Book: Using Flume - Flexible, Scalable, and Reliable Data Streaming
Author: Hari Shreedharan
Categories: DBA Blogs

11 Tips To Get Your Conference Abstract Accepted

This is what happens when your abstract is selected!
Ready for some fun? It's that time of year again and the competition will be intense. The calls for abstracts for a number of Oracle Database conferences are about to close.

The focus of this posting is how you can get a conference abstract accepted.

As a mentor, Track Manager and active conference speaker I've been helping DBAs get their abstracts accepted for many years. If you follow my 11 tips below, I'm willing to bet you will get a free pass to any conference you wish in any part of the world.

1. No Surprises! 
Track Manager After A Surprise
The Track Manager wants no surprises, great content, and a great presentation. Believe me when I say, they are looking for ways to reduce the risk of a botched presentation, a cancellation, or a no-show. Your abstract submission is your first chance to show you are serious and will help make the track incredibly awesome.

Tip: In all your conference communications, demonstrate a commitment to follow through.

2. Creative Title.
The first thing everyone sees is the title. I can personally tell you, if the title does not pique my curiosity without sounding stupid, then unless I know the speaker is popular I will not read the abstract. Why do I do this? Because as a Track Manager, I know conference attendees will do the same thing! And as a Track Manager, I want attendees to want to attend sessions in my track.

Tip: Find two people, read the title to them and ask what they think. If they say something like, "What are you going to talk about?" that's bad. Rework the title.

3. Tell A Story
The abstract must tell a compelling story. Oracle conferences are not academic conferences! There needs to be some problem along with a solution complete with drama woven into the story.

Tip: People forget bullet points, but they never forget a good story.

4. Easy To Read
The abstract must be easy to review. The abstract reviewers may have over a hundred abstracts to review. Make it a good quick read for the reviewers and your chances increase.

Tip: Have your computer read your abstract back to you. If you don't say, "Wow!" rework the abstract. 

5. Be A Grown-Up
You can increase the perception that you will physically show up and put on a great show at the conference by NOT putting emoji, bullet points, your name and title, or a product or service pitch into your abstract. NEVER copy/paste from a PowerPoint outline into the abstract or outline. (I've seen people do this!)

Tip: Track Managers do not want to babysit you. They want an adult who will help make their track great.

6. Submit Introductory Level Abstracts
I finally figured this out a couple years ago. Not everyone is ready for a detailed understanding of cache buffer chain architecture, diagnosis, and solution development. Think of it from a business perspective. Your market (audience) will be larger if your presentation is less technical. If this bothers you, read my next point.

Tip: Submit both an introductory level version and advanced level version of your topic.

7. Topics Must Be Filled
Not even the Track Manager knows what people will submit. And you do not know what the Track Manager is looking for. And you do not know what other people are submitting. Mash this together and it means you must submit more than one abstract. I know you really, really want to present on topic X. But would you rather not have an abstract accepted?

Tip: Submit abstracts on multiple topics. It increases your chances of being accepted.

8. Submit Abstract To Multiple Tracks
This is similar to submitting both an introductory and an advanced version of your abstract. Here's an example: if there is a DBA Bootcamp track and a Performance & Internals track, craft your Bootcamp version to have a more foundational/core feel, and craft your Performance & Internals version to feel more technical and advanced.

Do not simply change the title; the abstracts cannot be the same. If the conference managers or the Track Manager feel you are trying to game the conference, you present a risk to the conference and their track, and your abstracts will be rejected. So be careful and thoughtful.

Tip: Look for ways to adjust your topic to fit into multiple tracks.

9. Great Outline Shows Commitment
If the reviewers have read your title and abstract, they are taking your abstract seriously. Now is the time to close the deal by demonstrating you will put on a great show. And this means you already have in mind an organized and well thought out delivery. You convey this with a fantastic outline. I know it is difficult to create an outline BUT the reviewers also know this AND having a solid outline demonstrates to them you are serious, you will show up, and put on a great show.

Tip: Develop your abstract and outline together. This strengthens both and develops a kind of package the reviewers like to see.

10. Learning Objectives Show Value
You show the obvious value of your topic through the learning objectives. Personally, I use these to help keep me focused on my listener, not just on what I'm interested in at the moment. Because I love my work, I tend to think everyone else does too... not so. I must force myself to answer the question, "Why would a DBA care about this topic?"

Tip: Develop your learning objectives by asking yourself, "When my presentation is over, what do I want the attendees to remember?"

11. Submit About Problems You Solved
Submit on the topics you have personally explored and found fascinating. Every year, every DBA has had to drill deep into at least one problem. This concentrated effort means you know the topic very well. And this means you are qualified to tell others about it! People love to hear from people who are fascinated about something. Spread the good news resulting from a "bad" experience.

Tip: Submit on topics you have explored and are fascinated with.

How Many Abstracts Should I Submit?
It depends on the conference, but for big North American conferences like ODTUG, RMOUG and IOUG I suggest at least four.

Based on what I wrote above, pick three topics, perhaps create both an introductory and an advanced version, and look to see if it makes sense to submit to multiple tracks. That means you'll probably submit at least four abstracts. It's not as bad as it sounds, because you will only have perhaps three core abstracts. All the others are modifications to fit a specific need. Believe me, when you receive the acceptance email, it will all be worth it!

See you at the conference!

Craig.


Categories: DBA Blogs

Is Oracle Application Express supported?

Joel Kallman - Wed, 2014-10-08 13:38

Time to clear up some confusion.

In the past 60 days, I have encountered the following:
  • Two different customers who said they were told by Oracle Support that "APEX isn't supported."
  • An industry analyst who asked "Is use of Oracle Application Express supported?  There is an argument internally that it cannot be used for production applications."
  • A customer who was told by an external non-Oracle consultant "Oracle Application Express is good for a development environment but we don't see it being used in production."  I'm not even sure what that means.
To address these concerns as a whole, let me offer the following:
  1. Oracle Application Express is considered a feature of the Oracle Database.  It isn't classified as "free", even though there is no separate licensing fee for it.  It is classified as an included feature of the Oracle Database, no differently than XML DB, Oracle Text, Oracle Multimedia, etc.
  2. If you are licensed and supported for your Oracle Database, you are licensed and supported (by Oracle Support) for Oracle Application Express in that database.  Many customers aren't even aware that they are licensed for it.
  3. If you download a later version of Oracle Application Express made available for download from the Oracle Technology Network and install it into your Oracle Database, as long as you are licensed and supported for that Oracle Database, you are licensed and supported (by Oracle Support) for Oracle Application Express in that database.
  4. Oracle Application Express is listed in the Lifetime Support Policy: Oracle Technology Products document.

As far as the customers who believed they were told directly by Oracle Support that Oracle Application Express isn't supported, there was a common misunderstanding.  In their Service Requests to Oracle Support, they were told that Oracle REST Data Services (formerly called Oracle Application Express Listener, the Web front-end to Oracle Application Express) running in stand-alone mode isn't supported.  This is expressed in the Oracle REST Data Services documentation.  However, this does not pertain to the supportability of Oracle Application Express.  Additionally, a customer can run Oracle REST Data Services in a supported fashion in specific versions of Oracle WebLogic Server, Glassfish Server, and Apache Tomcat.  To reiterate - running Oracle REST Data Services in standalone mode is the one method which is not supported in production deployments, as articulated in the documentation - however, you can run it supported in Oracle WebLogic Server, Glassfish Server and Apache Tomcat.

Oracle Application Express has been a supported feature of the Oracle Database since 2004, since it first shipped as Oracle HTML DB 1.5 in Oracle Database 10gR1.  Every subsequent version of Oracle Application Express has been supported by Oracle Support when run in a licensed and supported Oracle Database.  Anyone who says otherwise is...confused.


How do I type e acute (é) on Windows 8

Rob Baillie - Wed, 2014-10-08 09:27

I keep on forgetting how to type é on Windows 8 (I used to CTRL+ALT+e, but that's now often reserved for the Euro symbol)

I then tend to run a search on Google, and end up being pointed towards 8 year old answers that point you to Character Map, options in old versions of Word, or the old way of typing the extended ASCII character code.

They all suck.

And then I remember - it's easy.

You start by pressing CTRL + a key that represents the accent, then type the letter you want accented.

For example, CTRL + ' followed by e gives you é.

Brilliant!

The great thing about using this technique is that the characters you use (dead letters) are representative of the accents you want to type. This makes them much easier to remember than the seemingly random character codes.

Here are the ones I know about:

  • CTRL + ' : acute (é)
  • CTRL + ` : grave (è)
  • CTRL + SHIFT + 6 / CTRL + ^ : circumflex (ê)
  • CTRL + , : cedilla (ç)
  • CTRL + ~ : perispomene / tilde (õ)
  • CTRL + SHIFT + 7 / CTRL + & : diphthongs / others (a = æ, o = œ, s = ß)
It doesn't quite work with every app (Blogger on Chrome, for example), but it certainly covers Office 2013, including both Outlook and Word.

Comparing SQL Execution Times From Different Systems

Suppose it's your job to identify SQL that may run slower in the about-to-be-upgraded Oracle Database. It's tricky because no two systems are alike. Just because the SQL run time is faster in the test environment doesn't mean the decision to upgrade is a good one. In fact, it could be disastrous.

For example: if a SQL statement runs 10 seconds in production and 20 seconds in QAT, but the production system is twice as fast as QAT, is that a problem? It's difficult to compare SQL run times when the same SQL resides in different environments.

In this posting, I present a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to more correctly detect risky SQL that may be placed into the upgraded production system.

And, there is a cool, free, downloadable tool involved!

Why SQL Can Run Slower In Different Environments
There are a number of reasons why a SQL's run time is different in different systems. An obvious reason is a different execution plan. A less obvious and much more complex reason is a workload intensity or type difference. In this posting, I will focus on CPU speed differences. Actually, what I'll show you is how to remove the CPU speed differences so you can appropriately compare two SQL statements. It's pretty cool.

The Mental Gymnastics
If a SQL statement's elapsed time in production is 10 seconds and 20 seconds in QAT, that’s NOT an issue IF the production system is twice as fast.

If this makes sense to you, then what you did was mentally adjust one of the systems so it could be appropriately compared. This is how I did it:

10 seconds in production * production is 2 times as fast as QA  = 20 seconds 

And in QA the SQL ran in 20 seconds... so really they ran "the same" in both environments. If I am considering placing the SQL from the test environment into the production environment, then this scenario does not raise any risk flags. The "trick" is determining that "production is 2 times as fast as QA" and then creatively using that information.

Determining The "Speed Value"

Fortunately, there are many ways to determine a system's "speed value." Basing the speed value on Oracle's ability to process buffers in memory has many advantages: a real load is not required or even desired, real Oracle code is being run at a particular version, real operating systems are being run and the processing of an Oracle buffer highly correlates with CPU consumption.

Keep in mind, this type of CPU speed test is not an indicator of scalability (the benefit of adding additional CPUs) in any way, shape, or form. It is simply a measure of brute-force Oracle buffer cache logical IO processing speed, based on a number of factors. If you are architecting a system, other tests will be required.

As you might expect, I have a free tool you can download to determine the "true speed" rating. I recently updated it to be more accurate, require fewer Oracle privileges, and also show the execution plan of the speed test tool's SQL. (A special thanks to Steve for the execution plan enhancement!) If the execution plan used by the speed tool is different on the various systems, then obviously we can't expect the "true speeds" to be comparable.

You can download the tool HERE.

How To Analyze The Risk

Before we can analyze the risk, we need the "speed value" for both systems. A faster system has a larger speed rating. If the production system speed rating is 600 and the QAT system speed rating is 300, then production is deemed "twice as fast."

Now let's put this all together and quickly go through three examples.

This is the core math:

standardized elapsed time = sql elapsed time * system speed value

So if the SQL elapsed time is 25 seconds and the system speed value is 200, then the standardized "apples-to-apples" elapsed time is 5000, which is 25*200. The "standardized elapsed time" is simply a way to compare SQL elapsed times; it is not what users will feel, and not the true SQL elapsed time.
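Here is the same arithmetic as a quick SQL sketch you can run anywhere (the elapsed times and speed values are just the sample numbers from the scenarios below, not from a real system):

-- hypothetical numbers: QAT speed value 300, PRD speed value 600
SELECT env,
       elapsed_seconds * speed_value AS standardized_elapsed
  FROM (SELECT 'QAT' env, 20 elapsed_seconds, 300 speed_value FROM dual
        UNION ALL
        SELECT 'PRD', 10, 600 FROM dual);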

To make this a little more interesting, I'll quickly go through three scenarios focusing on identifying risk.

1. The SQL truly runs the same in both systems.

Here is the math:

QAT standardized elapsed time = 20 seconds X 300 = 6000 seconds

PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds

In this scenario, the true speed situation is, QAT = PRD. This means, the SQL effectively runs just as fast in QAT as in production. If someone says the SQL is running slower in QAT and therefore this presents a risk to the upgrade, you can confidently say it's because the PRD system is twice as fast! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.

2. The SQL runs faster in production.

Now suppose the SQL runs for 30 seconds in QAT and for 10 seconds in PRD. Someone might say, "Well, of course it runs slower in QAT, because QAT is slower than the PRD system." Really? Everything is OK? Again, to make a fair comparison, we must compare the systems using a standardizing metric, which I have been calling the "standardized elapsed time."

Here are the scenario numbers:

QAT standardized elapsed time = 30 seconds X 300 = 9000 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds

In this scenario, the QAT standardized elapsed time is greater than the PRD standardized elapsed time. This means the SQL is truly running slower in QAT compared to PRD. Specifically, the slower SQL in QAT cannot be fully explained by the slower QAT system. Said another way, while we expect the SQL in QAT to run slower than in the PRD system, we didn't expect it to be quite so slow. There must be another reason for this slowness, which we are not accounting for. In this scenario, the QAT SQL should be flagged as presenting a significant risk when upgrading from QAT to PRD.

3. The SQL runs faster in QAT.

In this final scenario, the SQL runs for 15 seconds in QAT and for 10 seconds in PRD. Suppose someone was to say, "Well of course the SQL runs slower in QAT. So everything is OK." Really? Everything is OK? To get a better understanding of the true situation, we need to look at their standardized elapsed times.

QAT standardized elapsed time = 15 seconds X 300 = 4500 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds 

In this scenario, the QAT standardized elapsed time is less than the PRD standardized elapsed time. This means the SQL is actually running faster in QAT, even though the QAT wall time is 15 seconds and the PRD wall time is only 10 seconds. So while most people would flag this QAT SQL as "high risk," we know better! We know the SQL is actually running faster in QAT than in production! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.

In Summary...

Identifying risk is extremely important when planning for an upgrade. It is unlikely the QAT and production systems will be identical in every way. This mismatch makes identifying risk more difficult. One of the common differences between systems is their CPU processing speed. What I demonstrated is a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to more correctly detect risky SQL that may be placed into the upgraded production system.

What's Next?

Looking at the "standardized elapsed time" based on Oracle LIO processing is important, but it's just one reason why a SQL may have a different elapsed time in a different environment. One of the big "gotchas" in load testing is comparing production performance to a QAT environment with a different workload. Creating an equivalent workload on different systems is extremely difficult to do. But with some very cool math and a clear understanding of performance analysis, we can also create a more "apples-to-apples" comparison, just like we have done with CPU speeds. But I'll save that for another posting.

All the best in your Oracle performance work!

Craig.




Categories: DBA Blogs

rsyslog: Send logs to Flume

Surachart Opun - Mon, 2014-10-06 05:12
Good day for learning something new. After reading the Flume book, an idea popped into my head: I wanted to test "rsyslog" => Flume => HDFS. As we know, rsyslog can forward logs to other systems:
*.* @YOURSERVERADDRESS:YOURSERVERPORT ## for UDP
*.* @@YOURSERVERADDRESS:YOURSERVERPORT ## for TCP
On this host, rsyslog was set up like this:
[root@centos01 ~]# grep centos /etc/rsyslog.conf
*.* @centos01:7777
Coming back to Flume, I used the Simple Example from the documentation as a reference and changed it a bit, because I wanted it to write to HDFS.
[root@centos01 ~]# grep "^FLUME_AGENT_NAME\="  /etc/default/flume-agent
FLUME_AGENT_NAME=a1
[root@centos01 ~]# cat /etc/flume/conf/flume.conf
# example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
#a1.sources.r1.type = netcat
a1.sources.r1.type = syslogudp
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 7777
# Describe the sink
#a1.sinks.k1.type = logger
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://localhost:8020/user/flume/syslog/%Y/%m/%d/%H/
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.batchSize = 10000
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 10000
a1.sinks.k1.hdfs.filePrefix = syslog
a1.sinks.k1.hdfs.round = true


# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
[root@centos01 ~]# /etc/init.d/flume-agent start
Flume NG agent is not running                              [FAILED]
Starting Flume NG agent daemon (flume-agent):              [  OK  ]
Then I tested by logging in over ssh to generate some syslog events.
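If you don't want to wait for real syslog traffic, a simple alternative (assuming the local rsyslog forwards *.* to the Flume port as configured above) is to write a test message through syslog with the logger command:

[root@centos01 ~]# logger "flume rsyslog test message"

Either way, the agent log shows the events being written to HDFS: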
[root@centos01 ~]#  tail -0f  /var/log/flume/flume.log
06 Oct 2014 16:35:40,601 INFO  [hdfs-k1-call-runner-0] (org.apache.flume.sink.hdfs.BucketWriter.doOpen:208)  - Creating hdfs://localhost:8020/user/flume/syslog/2014/10/06/16//syslog.1412588139067.tmp
06 Oct 2014 16:36:10,957 INFO  [hdfs-k1-roll-timer-0] (org.apache.flume.sink.hdfs.BucketWriter.renameBucket:427)  - Renaming hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067.tmp to hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]# hadoop fs -ls hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   1 flume supergroup        299 2014-10-06 16:36 hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]#
[root@centos01 ~]#
[root@centos01 ~]# hadoop fs -cat hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
sshd[20235]: Accepted password for surachart from 192.168.111.16 port 65068 ssh2
sshd[20235]: pam_unix(sshd:session): session opened for user surachart by (uid=0)
su: pam_unix(su-l:session): session opened for user root by surachart(uid=500)
su: pam_unix(su-l:session): session closed for user root
Looks good... Anyway, it needs more adapting for real use...



Categories: DBA Blogs

What I like best about myself

FeuerThoughts - Sun, 2014-10-05 09:23
What could be more self-centered?

Why should anyone else in the world care what I like best about myself?

I have no idea. That is for sure. But, hey, what can I say? This is the world we live in (I mean: the artificial environment humans have created, mainly to avoid actually living in and on our amazing world).

It is an age of, ahem, sharing. And, ahem, advertising. Actually, first and foremost, advertising.

Anyway, screw all that. Here's what I like best about myself:

I love to be with kids. And I am, to put it stupidly but perhaps clearly, a kid whisperer.

Given the choice between spending time with an adult or spending time with a child, there is no contest. None at all. It's a bit of a compulsion, I suppose, but....

If there is a child in the room, I pay them all of my attention; I cannot stop myself from doing this. It just happens. Adults, for the most part, disappear. I engage with a child as a peer, another whole human. And usually children respond to me instantly and with great enthusiasm.

Chances are, if your child is between, say, three months and five years old, we will be fast friends within minutes. Your cranky baby might fall asleep in my arms as I sing Moonshadow to her or whisper nonsense words in her ear. Your shy three-year-old son might find himself talking excitedly about a snake he saw on a trail that day (he hadn't mentioned it to you). Your teenage daughter might be telling me about playing games on her phone and how she doesn't think her dad realizes how much she is doing it.

I have the most amazing discussions with children. And though I bet this will sound strange to you: some of my favorite and most memorable conversations have been with five-month-old babies. How is this possible, you might wonder. They can't even talk. Well, you can find out. Just try this at home with your baby:

Hold her about a foot away from your face, cradled in your arms. Look deeply and fully into her eyes. Smile deeply. And then say something along these lines, moving your mouth slowly: "Ooooh. Aaaaah. Maaaaa. Paaaaa." And then she will (sometimes) answer back, eyes never leaving yours....and you have a conversation. Your very first game of verbal Ping Pong. 

I suppose I could try to explain the feeling of pure happiness I experience at moments like this. I don't think, though, that written language is good for stuff like that. It's better for recording knowledge needed to destroy more and more of our planet to make humans comfortable.

And with my granddaughter, oh, don't even get me started. Sometimes I will be talking to her, our heads close together, and realize her face has gone into this kind of open, relaxed state in which she is rapt, almost in a trance, absorbing everything I am saying, the sound of my voice, my mouth moving. Just taking it all in. You'd better believe that I put some thought into what I am saying to this incredibly smart and observant "big girl" (who turns three in three weeks).

Here's another "try this at home" with your three year old (or two or four): talk about shadows. Where do they come from/ How do they relate to your body? Why does their shape change as the day goes on? Loey and I have had fun with shadows several times.

I have always been this way. I have no idea why. I have this funny feeling that it might actually be at least in some small way the result of a genetic mutation. I have a nephew who resembles me in several different, seemingly unconnected ways, including this love of and deep affinity for children.

I don't think that many people understand what I am doing when I spend time with children. I am called a "doting" grandfather. It offends me, though I certainly understand that no offense was intended.

I don't dote on Loey. Instead, I seek out every opportunity to share my wonder of our world and life with her, to help her understand and live in the world as effectively as possible. What this has meant lately is that I talk with her a lot about trees, how much I love them, how amazing they are.

One day at the park, as we walked past the entrance to the playground, I noticed a very small oak sapling - in essence, a baby oak tree.

When we got inside the park, there was a mature oak towering over our stroller. I asked Loey if she wanted to see a baby tree. She said yes, so I picked her up to get close to the mature oak's leaf. I showed her the shape of the leaf, and the big tree to which it was attached.

Then I took her outside and we looked at the sapling. I showed her how the leaves on this tiny baby tree were the same shape and size as those on the big tree. That's how we knew it was a baby of that big tree. And it certainly was interesting that the leaves would be the same size on the tiny sapling. It held her attention throughout. That was deeply satisfying.

Mostly what I do is look children directly in the eyes, give them my full attention, smile with great joy at seeing them. Babies are deeply hard-wired to read faces. They can see in the wrinkles around my widened eyes and the smile that is stretching across my face that I love them, accept them fully. And with that more or less physical connection established, they seem to relax, melt, soften with trust. They know they can trust me, and they are absolutely correct. 

In that moment, I would do anything for them.

This wisdom (that's how I see it) to accept the primacy of our young, my willingness to appear to adults as absolutely foolish, but to a child appear as a bright light, making them glow right back at me:

That is what I like best about me. 
Categories: Development

An OOW Summary from the ADF and MAF perspective

Shay Shmeltzer - Fri, 2014-10-03 13:39

Another Oracle OpenWorld is behind us, and it was certainly a busy one for us. In case you didn't have a chance to attend, or follow the Twitter frenzy during the week, here are the key takeaways that you should be aware of if you are developing with either Oracle ADF or Oracle MAF.

 Oracle Alta UI

We released our design patterns for building modern applications for multiple channels. This includes a new skin and many samples that show you how to create the type of UIs that we are now using for our modern cloud-based interfaces.

All the resources are at http://bit.ly/oraclealta

The nice thing is that you can start using it today in both Oracle ADF Faces and Oracle MAF - just switch the skin to get the basic color scheme. Instructions here.

Note, however, that Alta is much more than just a color change. If you really want an Alta-type UI you need to start designing your UI differently - take a look at some of the screen samples or our demo application for ideas.

Cloud Based Development

A few weeks before OOW we released our Developer Cloud Service in production, and our booth and sessions showing this were quite popular. For those who are not familiar, the Developer Cloud Service gives you a hosted environment for managing your code life cycle (Git version management, Hudson continuous integration, and easy cloud deployment), and it also gives you a way to track your requirements and manage team work.

While this would be relevant to any Java development team, for ADF developers there are specific templates in place to make things even easier.

You can get to experience this in a trial mode by getting a trial Java service account here.

Another developer-oriented cloud service that got a lot of focus this year was the upcoming Oracle Mobile Cloud Service, which includes everything your team will need in order to build mobile backends (APIs, connectors, notifications, storage and more). We ran multiple hands-on labs and sessions covering this, and it was featured in many keynotes too.

In the Application development tools general session we also announced that in the future we'll provide a capability called Oracle Mobile Application Accelerator (which we call Oracle MAX for short), which will allow power users to build on-device mobile applications easily through a web interface. The applications will leverage MAF as the framework, and as a MAF developer you'll be able to provide additional templates, components and functionality for those.

Another capability we showed in the same session was a cloud-based development environment that we are planning to add to both the Developer Cloud Service and the Mobile Cloud Service - for developers to be able to code in the cloud with the usual functions that you would expect from a modern code editor.


The Developer Community is Alive and Kicking

The ADF and MAF sessions were quite full this year, and additional community activities were successful as well, starting with a set of ADF/MAF sessions by users on the Sunday, courtesy of ODTUG and the ADF EMG. In one of the sessions, members of the community announced a new ADF data control for XML. Check out the work they did!

ODTUG also hosted a nice meetup for ADF/MAF developers and announced their upcoming mobile conference in December. They also have their KScope15 summer conference, which is looking for your abstract right now!

Coding Competition

Want to earn some money on the side? Check out the Oracle MAF Developer Challenge - build a mobile app and you can earn prizes that range from $6,000 to $1,000.

Sessions

With so many events taking place, it's sometimes hard to hit all the sessions that you are interested in. And while the best experience is to be in the room, you might get some mileage from just looking at the slides. You can find the slides for many sessions in the session catalog here, and a list of the ADF/MAF sessions here.

See you next year. 

Categories: Development

Virtualbox: only 32bit guests possible even though virtualization enabled in BIOS / Intel Process Identification Utility shows opposite to BIOS virtualization setting

Dietrich Schroff - Fri, 2014-10-03 04:08
Virtualbox on my Windows 8.1 stopped running 64bit guests a while ago. I did not track down the problem at the time. Now, some months later, I tried again and found some confusing things.

First setting:
BIOS virtualization enabled
Intel Processor Identification Utility in 8.1: virtualization disabled

Second setting:
BIOS virtualization disabled
Intel Processor Identification Utility in 8.1: virtualization enabled

With both settings: Virtualbox runs 32bit guests but no 64bit guests.
 

After some searching, I realized what was happening:
I had added Microsoft's Hyper-V virtualization. With that enabled, Windows 8.1 is no longer a real host; it is just another guest (the most important guest) on this computer. So with Hyper-V enabled, I was trying to run Virtualbox inside an already virtualized Windows 8.1.
After that it was easy: Just disable Hyper-V on Windows 8.1:
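One way to do that, as a minimal sketch, is from an elevated command prompt (the restart mentioned below is still required):

:: turn the Windows hypervisor off so Virtualbox can use the virtualization hardware directly
bcdedit /set hypervisorlaunchtype off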


And after a restart of Windows 8.1 I was able to run 64bit guests on Virtualbox again....

Multi Sheet Excel Output

Tim Dexter - Thu, 2014-10-02 18:28

I'm on a roll with posts. This blog can be rebuilt...

I received a question today from Camilo in Colombia asking how to achieve the following.

‘What are my options to deliver Excel files with multiple sheets? I know we can split 1 report into multiple sheets with the BIP Advanced Options, but what if I want to have 1 report per sheet, where each report in each sheet has an independent data model ….’

Well, it's not going to be easy if you have to have completely separate data models for each sheet. That would require generating multiple Excel outputs and then merging them, somehow.

However, suppose you can live with a single data model with multiple data sets, i.e. queries that connect to separate data sources, something like this:


Then we can help. Each query returns its own data set, but they will all be presented together in a single output that BIP can then render. Our data structure in the XML output would be:

<DS>
 <G1>
  ...
 </G1>
 <G2>
  ...
 </G2>
 <G3>
  ...
 </G3>
</DS>

Three distinct data sets within the same data output.

To get each to sit on a separate sheet within the Excel output is pretty simple. It depends on how much native Excel functionality you want.

Using an RTF template you just create the layouts for each data set on a page (or pages) separated by a page break (Ctrl-Enter). At runtime, BIP will place each output onto a separate sheet in the workbook. If you want to name each sheet you can use the <?spreadsheet-sheet-name: xpath-expression?> command. More info here. That's as sophisticated as it gets with the RTF templates. No calcs, no formulas, etc. Just put the output on a sheet, bam!
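For example, a minimal sketch (the sheet name here is just an illustrative XPath string literal, not taken from Camilo's report):

<?spreadsheet-sheet-name: 'Departments'?>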

Using an Excel template you can get more sophisticated with the layout.



This time though, you create the layout for each data set on separate sheets. In my example, sheet 1 holds the department data, sheet 2 the employee data, and so on. Some conditional formatting has snuck in there.

I have zipped up the sample files here.

FIN!

Categories: BI & Warehousing

Advanced Queue Quickie: Errors and Privileges

Don Seiler - Wed, 2014-10-01 23:07
File this one under the misleading-errors department. One of my developers was working with a new queue. He pinged me when he got this error while trying to create a job that used the queue:

ERROR at line 1:
ORA-27373: unknown or illegal event source queue
ORA-06512: at "SYS.DBMS_ISCHED", line 124
ORA-06512: at "SYS.DBMS_SCHEDULER", line 314
ORA-06512: at line 2

The CREATE_JOB statement was:

BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name => 'foo.bar_q_job',
job_type => 'PLSQL_BLOCK',
job_action => 'begin foo.bar_pkg.consume_bar_queue(); end;',
queue_spec => 'BAR.BAR_Q, FOO_BAR_AGENT',
enabled => true,
comments => 'This is a job to consume the bar.bar_q entries that affect foo.');
END;
/

After a few minutes of banging our heads, it became obvious that this was a permissions problem. The queue was owned by BAR, the job was being created as FOO. The ORA error message could/should have made this more obvious, in my opinion.

Anyway, the fix was simply to grant access to FOO:

BEGIN
  DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(
        privilege  => 'ALL',
        queue_name => 'bar.bar_q',
        grantee    => 'foo' );
END;
/

Hope this saves some banged heads for others.
Categories: DBA Blogs

Oracle APEX 5 Update from OOW

Scott Spendolini - Wed, 2014-10-01 09:18
The big news about Oracle APEX from OOW is not so much about what, but more about when. Much to many people's disappointment, APEX 5.0 is still a few months out. The "official" release date has been updated from "calendar year 2014" to "fiscal year 2015". For those not in the know, Oracle's fiscal year ends on May 31st, so that date represents the new high-water mark.

Despite this bit of bad news, there were a number of bits of good news as well.  First of all, there will be an EA3.  This is good because it demonstrates that the team has been hard at work fixing bugs and adding features.  Based on the live demonstrations that were presented, there are some subtle and some not-so-subtle things to look forward to.  The subtle include an even more refined UI, complete with smooth fade-through transitions.  I tweeted about the not-so-subtle the other day, but to recap here: pivot functionality in IRs, column toggle and reflow in jQuery Mobile.

After (or right before - it wasn't 100% clear) EA3 is released, the Oracle APEX team will host their first public beta program. This will enable select customers to download and install APEX 5.0 on their own hardware. This is an extraordinary and much-needed positive change in their release cycle, as for the first time, customers can upgrade their actual applications in their own environments and see what implications APEX 5.0 will bring. Doing a real-world upgrade on actual APEX applications is something that the EA instances could never even come close to pulling off.

After the public beta, Oracle will upgrade their internal systems to APEX 5.0 - and there are a lot of those. At last count, I think the number of workspaces was just north of 3,000. After the internal upgrade, apex.oracle.com will have its turn. And once that is complete, we can expect APEX 5.0 to be released.

No one likes delays. But in this case, it seems that the extra time required is quite justified, as APEX 5.0 still needs some work, and the upgrade path from 4.x needs to be nothing short of rock-solid. Keep in mind that with each release, there are a larger number of customers using a larger number of applications, so ensuring that their upgrade experience is as smooth as possible is just as important as, if not more important than, any new functionality.

In the meantime, keep kicking the tires on the EA instance and provide any feedback or bug reports!

Packt Publishing - ALL eBooks and Videos are just $10 each or less until the 2nd of October

Surachart Opun - Tue, 2014-09-30 11:36
Just spreading the word about a good campaign from Packt Publishing - good news for people who love to learn something new. ALL eBooks and Videos are just $10 each or less -- the more you choose to learn, the more you save:
  • Any 1 or 2 eBooks/Videos -- $10 each
  • Any 3-5 eBooks/Videos -- $8 each
  • Any 6 or more eBooks/Videos -- $6 each


Categories: DBA Blogs

Oracle Exalytics X4-4 - Bigger, Better, Stronger

Look Smarter Than You Are - Sun, 2014-09-28 11:49
X4-4 - Same price as the X3-4 but with more power
The big announcement about it is today at OpenWorld (it would be awesome if they mentioned it during the Intel keynote tonight), but the Exalytics X4-4 is actually available now. It's the same price as the X3-4 ($175,000 at list, not including software, maintenance, tax, title, license, yada yada). This does mean the X3 is - effective immediately - no longer available, but then again, since the new one is the same price, I'm not sure why anyone would want the older one. No word yet on whether you can upgrade an X3 to an X4, but since they did offer an upgrade kit from X2 to X3 (though I never heard of anyone buying it), I'm guessing there will be one for those wanting to make an X3 into an X4.
X4-4 Specs
The main improvement over the X3 is the number of cores: it's still 4 Intel chips, but those chips now have 15 cores each, meaning the X4 has 60 cores compared to the X3's 40 cores. Here are the important details:

  • 4 Intel Xeon E7-8895v2 processors running at 2.8 - 3.6 GHz
  • 8 - 60 cores (capacity on demand, more on this in a second)
  • 2 TB of RAM
  • 2.4 TB of PCI flash
  • 7.2 TB of hard disk running at 10K RPMs (not that fast these days)
  • 2 Infiniband ports running at 40 Gb/s
  • 4 Ethernet ports running at up to 10 Gb/s
Cool Thing 1: Variable Speed & Cores

You probably heard about this last July. Oracle worked with Intel to design a line of their Xeon E7-889x chips specifically for Oracle. What we didn't realize until we saw it show up on the X4 spec sheet was that the chips were going into the Exalytics X4. Simply put, on the fly, Exalytics can vary how many cores it uses, and with fewer cores, the speed goes up. If it's running 15 cores per chip, Intel sets the speed to 2.8 GHz. If it's only using 2 cores per chip, the speed goes all the way to 3.6 GHz (a GHz is one billion clock ticks per second).


But wait, you math geniuses say. Isn't 3.6 * 2 less than 2.8 * 15 (so why wouldn't Oracle just always leave all 60 cores on at the slower speed)? Well, yes, if you're actually using all those cores, and this is where you know the chip was apparently designed for Essbase (though it did premiere in Exadata first). As much as I love my Essbase, there are still transactions that end up single-threading (or using far fewer than the available cores on the box).

Say I'm running a massive allocation and despite my best efforts (and FIXPARALLEL), it's still single threading or running at 8 CPUs or fewer.  In this case, Exalytics is now smart enough to talk to those impressive new E7-8895v2 chips, scale down to as few cores as are actually needed, and in the process, up the clock speed for the remaining cores.  Take that, commodity hardware.  This really is the killer feature that makes Exalytics do something no other server running Essbase can do.

On a side note, Intel seems to be dropping the power on the non-used cores to nearly zero when not in use meaning the power consumption of your Exalytics box actually lowers on-demand.  So if your boss won't sign off on your new Exalytics X4, tell her she hates the planet.

Cool Thing 2: You Don't Need BIFS
Per the current Engineered Systems Price List (buried down in note 13), you no longer have to purchase BIFS (BI Foundation Suite) to buy Exalytics (either the X4 or T5). You can now own BIFS, OBIEE, Essbase+, or Hyperion Planning+ without having to get a VP to sign off on a special exemption. That's right, Planning people preferring to purchase pure premium power, you can now buy Exalytics. With this change, I presume that any new Planning customer looking for the best user experience will be buying Exalytics X4 along with Planning.

Also buried in the footnotes, you apparently can now buy Exalytics for as few as 20 named users.  Last time I checked (and I don't read every edition of the footnotes, haters who think I have no life), the minimum was 100 named users.

What's Next: HFM on Exalytics
We heard about it on the opening developer's day at Kscope: HFM should finally run on Exalytics in version 11.1.2.4 (which we're hoping to see by the end of 2014).  I'm not sure if it will run on both the T5 (Solaris) and the X4 (Linux) by year-end, but Linux is almost a given.  That said, I don't work for Oracle, so don't base any buying decisions on the belief that HFM will definitely run on the X4.  Just when it happens, be pleasantly surprised that you can now consolidate all your major Oracle Business Analytics apps together.

So any T5 news? Not at the moment. It's still available, running its 128 cores with 4 TB of RAM (and other cool things), so if you're looking for major horsepower and server consolidation, look to the T5.

I'll be updating this post after the OpenWorld keynote to include any new Exalytics news but if you hear any other Exalytics updates in the meantime, post it in the comments.

Categories: BI & Warehousing

oracle.ias.cache.CacheFullException: J2EE JOC-017 The cache is full

Vikram Das - Sat, 2014-09-27 11:40
Yesterday, the users of an EBS R12.2 instance got this error when they logged in:

Error Page
You have Encountered an unexpected error.  Please contact the System Administrator for assistance.

On checking the $EBS_DOMAIN_HOME/servers/oacore_server1/logs/oacore_server1.out, we found this error:



oracle.ias.cache.CacheFullException: J2EE JOC-017 The cache is full.
       at oracle.ias.cache.CacheHandle.findObject(CacheHandle.java:1680)
       at oracle.ias.cache.CacheHandle.locateObject(CacheHandle.java:1118)
       at oracle.ias.cache.CacheAccess.get(CacheAccess.java:877)
       at oracle.apps.jtf.cache.IASCacheProvider.get(IASCacheProvider.java:771)
       at oracle.apps.jtf.cache.CacheManager.getInternal(CacheManager.java:4802)
       at oracle.apps.jtf.cache.CacheManager.get(CacheManager.java:4624)
       at oracle.apps.fnd.cache.AppsCache.get(Unknown Source)
       at oracle.apps.fnd.functionSecurity.Grant.getGrantArray(Unknown Source)
       at oracle.apps.fnd.functionSecurity.Authorization.getFunctionSecurityGrantedMenusForGrantee(Authorization.java:829)
       at oracle.apps.fnd.functionSecurity.Authorization.getFunctionSecurityGrantedMenus(Authorization.java:744)
       at oracle.apps.fnd.functionSecurity.Authorization.getFuncSecGrants(Authorization.java:251)
       at oracle.apps.fnd.functionSecurity.Authorization.testMenuTreeFunction(Authorization.java:499)
       at oracle.apps.fnd.functionSecurity.Navigation.getMenuTree(Navigation.java:254)
       at oracle.apps.fnd.functionSecurity.Navigation.getMenuTree(Navigation.java:279)
       at oracle.apps.fnd.functionSecurity.Navigation.getMenuTree(Navigation.java:160)

We tried bouncing services and deleting $EBS_DOMAIN_HOME/servers/oacore_server1/cache. Neither action helped. Things got back to normal only after the Xmx, Xms, and permsize startup parameters for the oacore JVM were changed in the WebLogic console, on Gary's suggestion:

-XX:PermSize=512m -XX:MaxPermSize=512m -Xms4096m -Xmx4096m

I also changed it in the context file:

Old: s_oacore_jvm_start_options">-XX:PermSize=128m -XX:MaxPermSize=384m -Xms512m -Xmx512m
New: s_oacore_jvm_start_options">-XX:PermSize=512m -XX:MaxPermSize=512m -Xms4096m -Xmx4096m

The oacore_server1 and oacore_server2 were bounced after this.  We haven't seen that error ever since.


There is a support.oracle.com article: Receive Intermittent Error You Have Encountered An Unexpected Error. Please Contact Your System Administrator (Doc ID 1519012.1)
Cause
There are user accounts having extremely high numbers of FUN_ADHOC_RECI_XXXXXXX / FUN_ADHOC_INIT_XXXXXXX roles assigned.

Users have an extremely high number of (ad hoc) roles assigned to them, so when these users attempt to log in, this fills the JOC (Java Object Cache) and causes it to run into its limits, resulting in the errors reported. Once a bounce is done, all works fine until such a user logs in again.

While working in AGIS, creating and progressing batches, the workflow creates several ad hoc roles which remain on the system and do not get end-dated or deleted. This can cause performance issues.
Ad Hoc Roles in WF_LOCAL_ROLES starting with FUN_ADHOC_RECI_XXXXXXX ; FUN_ADHOC_INIT_XXXXXXX with no expiration_date.

A. Run the following SQL to verify if there are accounts having extreme numbers of roles assigned

SQL> SELECT user_name, count(*) FROM wf_user_roles WHERE role_name <> user_name GROUP BY user_name ORDER BY 2;


B. Run following for particular user

SQL> SELECT distinct role_name FROM wf_user_roles
WHERE user_name = cp_user_name
or (user_name = (SELECT name FROM wf_local_roles wlr, fnd_user fusr
WHERE fusr.person_party_id = wlr.orig_system_id
AND wlr.orig_system = 'HZ_PARTY'
AND fusr.user_name = cp_user_name
AND rownum < 2))
AND role_name <> user_name;

Note: Replace cp_user_name with name of user having high number of ADHOC roles


Solution
To implement the solution, please execute the following steps:

1. Ensure that you have taken a backup of your system before applying the recommended solution.

2. Follow the steps given in document to purge the WF_LOCAL_ROLES for the AGIS transactions in 'COMPLETE' status.

AGIS: HOW TO DELETE AD HOC ROLES CREATED IN WORKFLOW (Doc ID 1446561.1)

3. If you are satisfied that the issue is resolved, migrate the solution as appropriate to other environments.



We had also logged SR with Oracle where they pointed us to the very same article and also asked us to do the following:


Action Plan
===========

1. How to find out the existing adhoc roles?

select name, start_date, expiration_date
from wf_local_roles
where orig_system = 'WF_LOCAL_ROLES'
order by name;

2. Define an expiration date for the ad hoc role:

exec WF_DIRECTORY.SetAdHocRoleExpiration(role_name => '<role name>', expiration_date => sysdate-1);


3. Periodically, purge expired users and roles in order to improve performance.

exec WF_PURGE.Directory(end_date);

This purges all users and roles in the WF_LOCAL_ROLES,WF_LOCAL_USER_ROLES, and WF_USER_ROLE_ASSIGNMENTS tables whose expiration date is less than or
equal to the specified end date and that are not referenced in any notification.

Parameter: end_date Date to purge to.

For more information, please refer to Oracle Workflow API Reference on page 2 – 128.
Use the workflow APIs to purge the ad hoc roles:

NOTE:
After end-dating the adhoc roles, the expired adhoc roles can also be purged by running the Purge Obsolete Workflow Runtime Data concurrent program. Make sure the "Core Workflow Only" parameter is set to N.

Oracle also shared 3 open bugs (unpublished, readable only by Oracle employees) for this issue:

Bug 19025537 : ORACLE.IAS.CACHE.CACHEFULLEXCEPTION: J2EE JOC-017 THE CACHE IS FULL.
Bug 11772304 : JOC INVESTIGATION WITH 12.2
Bug 19582421 : R12.2 THE CACHE IS FULL; EXCEPTION IN OACORE_SERVER1 LOG.

Oracle finally shared the contents of Bug 19582421:

Action Plan
==========

Please review the following from Bug 19582421 : R12.2 THE CACHE IS FULL; EXCEPTION IN OACORE_SERVER1 LOG.


Workaround Steps:

1. Extract CacheDefaultConfig.xml from cache.jar


cd $FMW_HOME/oracle_common/modules/oracle.javacache_11.1.1
jar -xf cache.jar CacheDefaultConfig.xml

2. Edit CacheDefaultConfig.xml.
- diskCache size was the parameter that fixed it.
- changed the max-objects as well as per the bug.

Original:

<cache-configuration
  xmlns="http://www.oracle.com/oracle/ias/cache/configuration11"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" max-objects="5000"
  max-size="10" private="false" cache-dump-path="jocdump" system="false"
  clean-interval="60" version="11.1.1.2.0" internal-version="110000">
  ... init-retry-delay="2000" enable-ssl="false" auto-recover="false" ...
  ... dedicated-coordinator="false" outOfProc="false" ...
  ... default-level="SEVERE" ...
</cache-configuration>

Modified:

<cache-configuration
  xmlns="http://www.oracle.com/oracle/ias/cache/configuration11"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" max-objects="100000"
  max-size="50" private="false" cache-dump-path="jocdump" system="false"
  clean-interval="60" version="11.1.1.2.0" internal-version="110000">
  ... init-retry-delay="2000" enable-ssl="false" auto-recover="false" ...
  ... dedicated-coordinator="false" outOfProc="false" ...
  ... default-level="SEVERE" ...
</cache-configuration>

3. Upload the changed file to the jar file

jar uf cache.jar CacheDefaultConfig.xml

4. Modify javacache.xml and make the same changes. This file is probably not getting used, but I made the changes anyway to keep the values in sync.
cd $EBS_DOMAIN_HOME/config/fmwconfig/servers/oacore_server1

5. Bounce apache and oacore.

These 5 steps resolved the issue.

Categories: APPS Blogs

I Heart Logs - Event Data, Stream Processing, and Data Integration by Jay Kreps; O'Reilly Media

Surachart Opun - Sat, 2014-09-27 00:01
I have worked on the server side for a long time as a System Administrator, so I spend a lot of time with logs, using them for checks and issue investigation. Some companies have policies to keep logs for over a year, or even over ten years, so it is not unusual to look for ideas on how to store and integrate logs and do something with them.
A book titled "I Heart Logs - Event Data, Stream Processing, and Data Integration" by Jay Kreps is very interesting. I wanted to know what I could learn from it, how logs work in distributed systems, and to learn from an author who works at LinkedIn. A book! Not much in terms of page count, but it gives a great deal on data flow ideas and how logs work, and the author shows readers why logs are worthy of their attention. The book has only 4 chapters, but readers will get the concepts and ideas behind data integration (making all of an organization's data easily available in all its storage and processing systems), real-time data processing (computing derived data streams), and distributed system design (how practical systems can be simplified with a log-centric design). In addition, I like it because the author wrote from his experience at LinkedIn.

After reviewing: the book refers to a lot of useful information (in the ebook it's easy to click the links), which readers can use to find out more on the Internet. For data integration, it focuses on Kafka, a distributed, partitioned, replicated commit log service that provides the functionality of a messaging system. Additionally, it explains why the Big Data Lambda Architecture is good for a batch system plus a stream processing system, and makes points about the things a log can do.

So, Readers will be able to learn:
  • Learn how logs are used for programmatic access in databases and distributed systems
  • Discover solutions to the huge data integration problem when more data of more varieties meet more systems
  • Understand why logs are at the heart of real-time stream processing
  • Learn the role of a log in the internals of online data systems
  • Explore how Jay Kreps applies these ideas to his own work on data infrastructure systems at LinkedIn
Book - I Heart Logs - Event Data, Stream Processing, and Data Integration
Author: Jay Kreps
Categories: DBA Blogs

ODAC 12c Release 3 Beta now available

Christian Shay - Fri, 2014-09-26 11:17

The Oracle Data Access Components (ODAC) 12c Release 3 Beta is now available! This beta introduces numerous new features for Entity Framework and managed ODP.NET.


Entity Framework 6 Code First
ODAC 12c R3 is the first ODP.NET release to certify with Entity Framework (EF) 6 and EF Code First. EF Code First is a popular development model for .NET object-relational mapping. Developers define the application domain model using source code, rather than with a designer or an XML-based configuration file. An EF Code First model's classes are defined in code through Plain Old CLR Objects (POCOs).

This support includes interoperability with the two Entity Framework methods that control the resulting Oracle data type: Data Annotations and Code First Fluent API. Data Annotations permit you to explicitly mark a class property with one or more attributes, whereas the Code First Fluent API permits you to use code rather than attributes to achieve the same goal.

Code First developers can modify and create their database schema as their model changes via EF Code First Migrations. ODP.NET supports EF Code First Migrations through the Visual Studio Package Manager Console commands.

These features are all available in both managed and unmanaged ODP.NET.


New ODP.NET, Managed Driver Features
Several new managed ODP.NET features have been delivered with this beta. XML DB developers can now use all of the XML classes that are supported by ODP.NET, Unmanaged Driver, with the exception of the OracleXmlType.IsFragment and OracleCommand.XmlCommandType properties. This makes unmanaged ODP.NET XML DB application migration to the managed driver a simple process.

ODP.NET, Managed Driver supports the VARCHAR2, NVARCHAR2, and RAW data types up to 32 KB in size. No code changes are required to use the larger data types (which are a new Oracle Database 12c feature). By storing more data, developers can use these data types more frequently, providing programming flexibility. In addition, SQL Server to Oracle Database application migration is easier with these new data type sizes.

When using array binding to execute multiple DML statements, ODP.NET, Managed Driver can now provide an array that lists the number of rows affected for each input value from the bound array, rather than just the total number of rows affected. To retrieve the row counts, applications read the OracleCommand.ArrayBindRowsAffected property. You can use this information to better evaluate the DML's efficiency and whether the data changes were correctly applied.


.NET Framework 4.5.2 and Distributed Transactions
The ODP.NET managed and unmanaged beta drivers are certified with the new .NET Framework 4.5.2 release. .NET 4.5.2 introduces a new Microsoft Distributed Transaction Coordinator feature that allows ODP.NET, Managed Driver to support distributed transactions without requiring you to deploy Oracle.ManagedDataAccessDTC.dll with your application.


More to Come
Up next, the ODAC team plans a NuGet-installable managed and unmanaged ODP.NET. Stay tuned to @OracleDOTNET to learn when it is available. We hope to hear your feedback via the OTN discussion forums.
